WorldWideScience

Sample records for learning sound recording

  1. Learning with Sound Recordings: A History of Suzuki's Mediated Pedagogy

    Science.gov (United States)

    Thibeault, Matthew D.

    2018-01-01

    This article presents a history of mediated pedagogy in the Suzuki Method, the first widespread approach to learning an instrument in which sound recordings were central. Media are conceptualized as socially constituted: philosophical ideas, pedagogic practices, and cultural values that together form a contingent and changing technological…

  2. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations, and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac…

  3. Synchronized tapping facilitates learning sound sequences as indexed by the P300.

    Science.gov (United States)

    Kamiyama, Keiko S; Okanoya, Kazuo

    2014-01-01

    The purpose of the present study was to determine whether and how single-finger tapping in synchrony with sound sequences contributed to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while they tapped in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while they kept pressing a key until the sequence ended. After these learning sessions, we presented the two melodies again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each melody deviated from the original tones. An analysis of the grand average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. In addition, the P300 effect in the tapping condition was stronger in participants whose tapping was more highly synchronized during the learning sessions. These results indicated that single-finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals' musical ability to coordinate their finger movements with external auditory events.

  4. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to those used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  5. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. Here we present a semi-supervised deep learning algorithm to automatically classify lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
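
    The model in the paper is a deep network; purely to illustrate the semi-supervised idea it describes (train on a small labeled set, machine-label confident unlabeled files, retrain, evaluate by AUC), here is a minimal Python sketch with a stand-in linear classifier and entirely hypothetical feature data and thresholds:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows are acoustic feature vectors per sound file.
X_lab = rng.normal(size=(200, 20)); y_lab = rng.integers(0, 2, 200)
X_unlab = rng.normal(size=(2000, 20))
X_test = rng.normal(size=(300, 20)); y_test = rng.integers(0, 2, 300)

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# One self-training round: pseudo-label the unlabeled pool where the
# classifier is confident, then retrain on labeled + pseudo-labeled data.
proba = clf.predict_proba(X_unlab)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, (proba[confident] > 0.5).astype(int)])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```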

  6. Sound and recording applications and theory

    CERN Document Server

    Rumsey, Francis

    2014-01-01

    Providing vital reading for audio students and trainee engineers, this guide is ideal for anyone who wants a solid grounding in both theory and industry practices in audio, sound and recording. There are many books on the market covering "how to work it" when it comes to audio equipment, but Sound and Recording isn't one of them. Instead, you'll gain an understanding of "how it works" with this approachable guide to audio systems. New to this edition: Digital audio section revised substantially to include the latest developments in audio networking (e.g. RAVENNA, AES X-192, AVB), high-resolut…

  7. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
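
    The paper's exact codec is not reproduced here; the Python sketch below only shows the generic recipe such low-complexity lossless coders follow (a fixed integer predictor plus Rice coding of the residuals), with an illustrative parameter choice and synthetic 16-bit samples:

```python
import numpy as np

def rice_encode(residuals, k):
    """Rice/Golomb-code signed integer residuals with parameter k."""
    out = []
    for r in residuals:
        u = (int(r) << 1) ^ (int(r) >> 63)   # zig-zag: signed -> unsigned
        q, rem = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + format(rem, f"0{k}b"))
    return "".join(out)

def compress_block(x):
    """Fixed 2nd-order integer predictor + Rice coding of the residual."""
    x = np.asarray(x, dtype=np.int64)
    res = np.concatenate((x[:2], x[2:] - 2 * x[1:-1] + x[:-2]))
    k = max(1, int(np.log2(np.mean(np.abs(res)) + 1)))  # crude choice of k
    return rice_encode(res, k)

# Illustrative check on a synthetic tonal signal with 16-bit samples
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
sig = np.round(3000 * np.sin(2 * np.pi * 440 * t) + rng.normal(0, 50, fs))
bits = compress_block(sig)
print("compression factor:", 16 * len(sig) / len(bits))
```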

  8. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility from noncontact sound recordings. Using simulations on the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when power-normalized cepstral coefficients are used as acoustic-feature inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
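
    The trained-network detector is not reproduced here; this Python sketch only illustrates the three time-domain features named in the abstract (BS per minute, signal-to-noise ratio, sound-to-sound interval), computed from a crude envelope-threshold detector with assumed thresholds and synthetic audio:

```python
import numpy as np
from scipy.signal import hilbert

def bowel_sound_features(x, fs, k=4.0):
    """Events/min, SNR (dB) and mean sound-to-sound interval (s) from a
    mono recording, using a plain envelope threshold as the detector."""
    env = np.abs(hilbert(x))
    w = int(0.02 * fs)
    env = np.convolve(env, np.ones(w) / w, mode="same")  # smooth envelope
    noise_floor = np.median(env)                 # robust background estimate
    active = env > k * noise_floor
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)  # rising edges
    bs_per_min = len(onsets) * 60 * fs / len(x)
    snr_db = 20 * np.log10(np.mean(env[active]) / noise_floor)
    ssi = np.diff(onsets) / fs                   # sound-to-sound intervals
    return bs_per_min, snr_db, float(np.mean(ssi)) if ssi.size else np.nan

# Synthetic demo: short noise bursts every 2 s on a quiet background
rng = np.random.default_rng(1)
fs = 8000
x = rng.normal(0, 0.01, fs * 10)
for s in range(fs, fs * 10, fs * 2):
    x[s:s + fs // 10] += rng.normal(0, 0.2, fs // 10)
print(bowel_sound_features(x, fs))
```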

  9. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift, because conceptualizing sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, grasping and using a generalized model of sound transmission poses great challenges for students. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings, from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful as early as ages 10-11. However, the older the students, the more advanced their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  10. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra…
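
    As a rough illustration of the long-term average spectral analysis described above, the Python sketch below computes relative levels in the three frequency bands named in the abstract; the input is a synthetic stand-in, not study data:

```python
import numpy as np
from scipy.signal import welch

def band_levels_db(x, fs, bands=((10, 100), (100, 500), (500, 5000))):
    """Long-term average spectrum via Welch's method, then the relative
    level (dB) of each of the three energy bands analysed in the study."""
    f, pxx = welch(x, fs=fs, nperseg=4096)
    total = pxx.sum()
    return {f"{lo}-{hi} Hz": 10 * np.log10(pxx[(f >= lo) & (f < hi)].sum() / total)
            for lo, hi in bands}

fs = 16000
x = np.random.default_rng(2).normal(size=fs * 60)  # stand-in 1-min recording
print(band_levels_db(x, fs))
```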

  11. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sound (S1 and S2) is fundamental in the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds, based on the duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary angiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds
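
    The authors' DHMM is not reproduced here, but the sketch below implements the same idea in miniature: a duration-dependent Viterbi search over the cyclic S1-systole-S2-diastole chain, scoring each candidate segment by envelope amplitude and a Gaussian duration prior. All parameters and the demo envelope are invented for illustration:

```python
import numpy as np

def log_gauss(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def dhmm_segment(env, fs, mu_dur, sd_dur, mu_amp, sd_amp, max_dur):
    """Duration-dependent Viterbi over the cyclic chain
    0=S1 -> 1=systole -> 2=S2 -> 3=diastole -> 0 -> ...
    env: signal envelope; durations in seconds, max_dur in frames."""
    T = len(env)
    best = np.full((T + 1, 4), -np.inf)  # best[t, s]: path where s ends at t
    back = np.zeros((T + 1, 4, 2), dtype=int)
    best[0, :] = 0.0                     # the cycle may start in any state
    emis = np.stack([log_gauss(env, mu_amp[s], sd_amp[s]) for s in range(4)])
    cum = np.hstack([np.zeros((4, 1)), np.cumsum(emis, axis=1)])
    for t in range(1, T + 1):
        for s in range(4):
            prev = (s - 1) % 4
            for d in range(1, min(t, max_dur[s]) + 1):
                score = (best[t - d, prev] + cum[s, t] - cum[s, t - d]
                         + log_gauss(d / fs, mu_dur[s], sd_dur[s]))
                if score > best[t, s]:
                    best[t, s], back[t, s] = score, (t - d, prev)
    s, t = int(np.argmax(best[T])), T
    labels = np.empty(T, dtype=int)
    while t > 0:                         # backtrack segment by segment
        t0, prev = back[t, s]
        labels[t0:t] = s
        t, s = t0, prev
    return labels

# Demo on a crude synthetic envelope (50 frames/s): S1 0.12 s, systole 0.2 s,
# S2 0.1 s, diastole 0.4 s, repeated five times.
env = np.tile(np.r_[np.ones(6), .1 * np.ones(10), np.ones(5), .1 * np.ones(20)], 5)
labels = dhmm_segment(env, fs=50,
                      mu_dur=[.12, .2, .1, .4], sd_dur=[.03] * 4,
                      mu_amp=[1, .1, 1, .1], sd_amp=[.2] * 4,
                      max_dur=[15, 25, 15, 40])
print(labels[:41])
```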

  12. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training, with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, then delivers the candidates with lower scores to human annotators while those with higher scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768

  13. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training, with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, then delivers the candidates with lower scores to human annotators while those with higher scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly fewer labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human-labeled instances is achieved in both the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
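
    A minimal Python sketch of the confidence-based routing that both records above describe: instances the classifier is unsure about go to human annotators (active learning), while high-confidence instances are machine-labeled (self-training). Classifier choice, thresholds and data are assumptions, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X_lab, y_lab = rng.normal(size=(100, 16)), rng.integers(0, 4, 100)
X_pool = rng.normal(size=(5000, 16))          # unlabeled sound instances

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
proba = clf.predict_proba(X_pool)
conf = proba.max(axis=1)                      # classifier confidence score

to_human = np.flatnonzero(conf < 0.5)         # active learning: ask annotators
to_machine = np.flatnonzero(conf >= 0.9)      # self-training: auto-label
auto_labels = clf.classes_[proba.argmax(axis=1)[to_machine]]
print(f"{len(to_human)} instances routed to human labeling, "
      f"{len(to_machine)} auto-labeled by the machine")
```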

  14. 37 CFR 380.3 - Royalty fees for the public performance of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... EPHEMERAL REPRODUCTIONS § 380.3 Royalty fees for the public performance of sound recordings and for ephemeral recordings. (a) Royalty rates and fees for eligible digital transmissions of sound recordings made...

  15. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population.

    Science.gov (United States)

    Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe

    2016-03-01

    Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from respiratory sounds recorded with a smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis and a pattern classifier using machine learning algorithms) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm on a set of 39 recordings with a single operator and found fair agreement (kappa=0.28, CI95% [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is that it works on contact-free sound recordings, which is especially valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.
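
    The agreement statistics quoted above (sensitivity, specificity, Cohen's kappa) can be computed as in this small Python sketch; the labels are hypothetical stand-ins, not study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical binary labels: 1 = wheezing, 0 = no wheezing
algo     = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 0])
operator = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(operator, algo).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("kappa:", cohen_kappa_score(operator, algo))
```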

  16. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This…

  17. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, with several kinds of noise induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively
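
    The paper's full physiologically inspired criteria are not reproduced here; this Python sketch only captures the core periodicity idea: a clean segment should show a strong envelope-autocorrelation peak at a plausible cardiac period. Thresholds and test signals are invented:

```python
import numpy as np
from scipy.signal import hilbert

def looks_periodic(x, fs, lo_bpm=40, hi_bpm=160, min_peak=0.4):
    """Heuristic: a clean heart-sound segment has a strong envelope
    autocorrelation peak at a lag matching a plausible heart rate."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac = ac / ac[0]                        # normalize so ac[0] == 1
    lo = int(fs * 60 / hi_bpm)             # shortest plausible period (lag)
    hi = int(fs * 60 / lo_bpm)             # longest plausible period (lag)
    return float(ac[lo:hi].max()) > min_peak

# Synthetic check: periodic clicks at ~72 bpm vs. the same buried in noise
fs = 1000
t = np.arange(5 * fs) / fs
clicks = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
print(looks_periodic(clicks, fs),
      looks_periodic(clicks + np.random.default_rng(4).normal(0, 1, t.size), fs))
```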

  18. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this medium as reproduc…

  19. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    Science.gov (United States)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinary high-resolution record of paleoseismicity in the region.

  20. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory afforded by prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, in contrast to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  1. Learning about the Dynamic Sun through Sounds

    Science.gov (United States)

    Quinn, M.; Peticolas, L. M.; Luhmann, J.; MacCallum, J.

    2008-06-01

    Can we hear the Sun or its solar wind? Not in the sense that they make sound. But we can take the particle, magnetic field, electric field, and image data and turn them into sound to demonstrate what the data tell us. We present work on turning data from the two-satellite NASA mission called STEREO (Solar TErrestrial RElations Observatory) into sounds and music (sonification). STEREO has two satellites orbiting the Sun near Earth's orbit to study coronal mass ejections (CMEs) from the corona. One sonification project aims to inspire musicians, museum patrons, and the public to learn more about CMEs by downloading STEREO data and using it to make music. We demonstrate the software and discuss the way in which it was developed. A second project aims to produce a museum exhibit using STEREO imagery and sounds from STEREO data. We demonstrate a "walk across the Sun" created for this exhibit so people can hear the features on solar images. We show how pixel intensity translates into pitches from selectable scales, with selectable musical scale size and octave locations. We also share our successes and lessons learned.
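
    A toy Python version of the "walk across the Sun" mapping described above: pixel intensities select pitches from a chosen musical scale with a selectable octave span. The scale, note length and input row are illustrative assumptions, not the exhibit's actual settings:

```python
import numpy as np

def sonify(pixels, fs=22050, note_dur=0.15,
           scale=(0, 2, 4, 7, 9), base_midi=60, octaves=2):
    """Map each pixel intensity (0..1) to a pitch drawn from `scale`
    (semitone offsets, pentatonic here) spanning `octaves` octaves."""
    degrees = [base_midi + 12 * o + s for o in range(octaves) for s in scale]
    n = int(fs * note_dur)
    t = np.arange(n) / fs
    out = []
    for p in np.clip(pixels, 0, 1):
        midi = degrees[int(p * (len(degrees) - 1))]
        freq = 440.0 * 2 ** ((midi - 69) / 12)
        out.append(np.sin(2 * np.pi * freq * t) * np.hanning(n))  # click-free
    return np.concatenate(out)

# A synthetic "scan line" across a solar image
row = np.abs(np.sin(np.linspace(0, np.pi, 32)))
audio = sonify(row)  # save with e.g. scipy.io.wavfile.write("sun.wav", 22050, audio)
```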

  2. Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces.

    Science.gov (United States)

    Ekman, Maria Rådsten; Lundén, Peter; Nilsson, Mats E

    2015-11-01

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated with pleasantness. However, an additional experiment, using the same sounds set equal in overall level, revealed a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.
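
    A Python sketch of the three acoustic descriptors the study relates to pleasantness (overall level, temporal variability of short-term levels, spectral centroid); the frame length and stand-in signal are assumptions:

```python
import numpy as np
from scipy.signal import welch

def fountain_descriptors(x, fs, frame=0.125):
    """Overall level (dB re full scale), temporal variability (std of
    short-term frame levels, dB), and spectral centroid (Hz)."""
    n = int(fs * frame)
    frames = x[: len(x) // n * n].reshape(-1, n)
    lev = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    f, pxx = welch(x, fs=fs, nperseg=2048)
    centroid = np.sum(f * pxx) / np.sum(pxx)
    overall = 10 * np.log10(np.mean(x ** 2) + 1e-12)
    return overall, lev.std(), centroid

fs = 16000
x = np.random.default_rng(5).normal(size=fs * 5)  # stand-in fountain noise
print(fountain_descriptors(x, fs))
```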

  3. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily Myers

    2014-08-01

    Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  4. The Keyimage Method of Learning Sound-Symbol Correspondences: A Case Study of Learning Written Khmer

    Directory of Open Access Journals (Sweden)

    Elizabeth Lavolette

    2009-01-01

    I documented my strategies for learning sound-symbol correspondences during a Khmer course. I used a mnemonic strategy that I call the keyimage method. In this method, a character evokes an image (the keyimage), which evokes the corresponding sound. For example, the keyimage for the character 2 could be a swan with its head tucked in. This evokes the sound "kaw" that a swan makes, which sounds similar to the Khmer sound corresponding to 2. The method has some similarities to the keyword method. Considering the results of keyword studies, I hypothesize that the keyimage method is more effective than rote learning, and that peer-generated keyimages are more effective than researcher- or teacher-generated keyimages, which are in turn more effective than learner-generated ones. In Dr. Andrew Cohen's plenary presentation at the Hawaii TESOL 2007 conference, he mentioned that more case studies are needed on learning strategies (LSs). One reason to study LSs is that what learners do with input to produce output is unclear, and knowing what strategies learners use may help us understand that process (Dörnyei, 2005, p. 170). Hopefully, we can use that knowledge to improve language learning, perhaps by teaching learners to use the strategies that we find. With that in mind, I have examined the LSs that I used in studying Khmer as a foreign language, focusing on learning the syllabic alphabet.

  5. Usability of Computerized Lung Auscultation-Sound Software (CLASS) for learning pulmonary auscultation.

    Science.gov (United States)

    Machado, Ana; Oliveira, Ana; Jácome, Cristina; Pereira, Marco; Moreira, José; Rodrigues, João; Aparício, José; Jesus, Luis M T; Marques, Alda

    2018-04-01

    The mastering of pulmonary auscultation requires complex acoustic skills. Computer-assisted learning tools (CALTs) have the potential to enhance the learning of these skills; however, few have been developed for this purpose, and those available do not integrate all the required features. Thus, this study aimed to assess the usability of a new CALT for learning pulmonary auscultation. Computerized Lung Auscultation-Sound Software (CLASS) usability was assessed by eight physiotherapy students using computer screen recordings, think-aloud reports, and facial expressions. Time spent in each task, frequency of messages and facial expressions, number of clicks, and problems reported were counted. The timelines of the three methods used were matched/synchronized and analyzed. The exercises and the annotation of respiratory sounds were the tasks requiring the most clicks (median 132, interquartile range [23-157]; 93 [53-155]; 91 [65-104], respectively) and where most errors (19%; 37%; 15%, respectively) and problems (n = 7; 6; 3, respectively) were reported. Each participant reported a median of 6 problems, with a total of 14 different problems found, mainly related to CLASS functionalities (50%). Smile was the only facial expression present in all tasks (n = 54). CLASS is the only CALT available that meets all the required features for learning pulmonary auscultation. The combination of the three usability methods identified advantages/disadvantages of CLASS and offered guidance for future developments, namely in annotations and exercises. This will allow the improvement of CLASS and enhance students' activities for learning pulmonary auscultation skills.

  6. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    Science.gov (United States)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

    Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches using unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis whitening to reduce the dimensionality of the spectrogram. Finally, we apply a two-layer autoencoder to extract the features of the spectrogram. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
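
    The following Python sketch mirrors the front end described in the abstract: a Shannon-energy envelope, per-cycle spectrograms, and PCA whitening. The autoencoder stage is omitted, and the "cycles" here are random stand-ins rather than segmented heart sounds:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

def shannon_envelope(x, fs, win=0.02):
    """Average Shannon energy envelope: mean of -x^2 log x^2 per frame."""
    x = x / (np.abs(x).max() + 1e-12)
    se = -x ** 2 * np.log(x ** 2 + 1e-12)
    n = int(win * fs)
    env = se[: len(se) // n * n].reshape(-1, n).mean(axis=1)
    return (env - env.mean()) / (env.std() + 1e-12)

rng = np.random.default_rng(6)
fs = 2000
cycles = [rng.normal(size=1600) for _ in range(40)]  # stand-in cardiac cycles
env = shannon_envelope(np.concatenate(cycles), fs)   # envelope stage (the
# paper locates S1 peaks on this envelope to cut out cycles)
feats = []
for c in cycles:
    _, _, sxx = spectrogram(c, fs=fs, nperseg=128, noverlap=64)
    feats.append(np.log(sxx + 1e-12).ravel())        # log-spectrogram vector
Xw = PCA(n_components=20, whiten=True).fit_transform(np.asarray(feats))
print(Xw.shape)  # whitened features, ready for a small autoencoder
```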

  7. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015), high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the big-data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al., GRL 2007; [2] Livina et al., Climate of the Past 2010; [3] Livina et al., Chaos 2015.
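
    A minimal Python sketch of the preprocessing step described above: band-pass filtering followed by 10-minute-average sound pressure levels. The filter order and the stand-in pressure signal are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_spl_10min(x, fs, lo, hi, p_ref=1e-6):
    """10-minute-average sound pressure levels (dB re 1 uPa) of x (pressure
    in Pa) in the band [lo, hi] Hz: one value per 10-min window."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    y = sosfilt(sos, x)
    n = int(600 * fs)                      # 600 s = 10 min of samples
    wins = y[: len(y) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(wins ** 2, axis=1))
    return 20 * np.log10(rms / p_ref)

fs = 250
x = np.random.default_rng(7).normal(0, 0.1, fs * 1200)  # 20 min stand-in
print(band_spl_10min(x, fs, 10, 100))
```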

  8. 37 CFR 262.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    From Title 37 (Patents, Trademarks, and Copyrights), 2010-07-01 edition: … MAKING OF EPHEMERAL REPRODUCTIONS § 262.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) Basic royalty rate. Royalty rates and fees for eligible nonsubscription…

  9. Neural dynamics of learning sound-action associations.

    Directory of Open Access Journals (Sweden)

    Adam McNamara

    A motor component is a prerequisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts with the neural substrate underlying the prerequisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral premotor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We used functional connectivity analysis to eliminate the often-present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. The left inferior parietal lobule (l-IPL) and bilateral loci in and around visual area V5, the right orbital frontal gyrus, right hippocampus, left parahippocampus, right head of caudate, right insula and left lingual gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, increasing connectivity between areas of the imaged network, as well as the right middle frontal gyrus, with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds. The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of…

  10. Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Nonaka, Yulri; Katahira, Kentaro; Okanoya, Kazuo

    2011-11-01

    The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases from the audio recording of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to segment automatically the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% average), and it generalizes over different babies, ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, count the crying rate, and measure other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants.
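
    A minimal Python sketch of the Gaussian-mixture variant of the state-likelihood estimation described above: one mixture per phase scores each frame; in the paper these likelihoods feed an HMM for smoothing, which is replaced here by a per-frame argmax. Features and dimensions are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Hypothetical per-frame acoustic features from the annotated recordings
X_exp = rng.normal(0.0, 1.0, size=(500, 13))   # expiratory frames
X_insp = rng.normal(1.5, 1.0, size=(400, 13))  # inspiratory frames

gmm_exp = GaussianMixture(n_components=4, random_state=0).fit(X_exp)
gmm_insp = GaussianMixture(n_components=4, random_state=0).fit(X_insp)

X_new = rng.normal(0.5, 1.0, size=(200, 13))   # frames from a new recording
ll = np.stack([gmm_exp.score_samples(X_new), gmm_insp.score_samples(X_new)])
labels = ll.argmax(axis=0)                     # 0 = expiration, 1 = inspiration
print(np.bincount(labels))
```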

  11. 37 CFR 261.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    From Title 37 (Patents, Trademarks, and Copyrights), 2010-07-01 edition: … § 261.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) For the period October 28, 1998, through December 31, 2002, royalty rates and fees for eligible digital…

  12. Machine learning analysis of binaural rowing sounds

    DEFF Research Database (Denmark)

    Johard, Leonard; Ruffaldi, Emanuele; Hoffmann, Pablo F.

    2011-01-01

    Techniques for machine hearing are gaining potential due to new application domains. In this work we address the analysis of rowing sounds in a natural context, for the purpose of supporting a training system based on virtual environments. This paper presents the acquisition methodology and the evaluation of different machine learning techniques for classifying rowing-sound data. We see that a combination of principal component analysis and shallow networks performs as well as deep architectures, while being much faster to train.

  13. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    Directory of Open Access Journals (Sweden)

    Sabine van der Ham

    2015-10-01

    When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  14. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  15. The Technique of the Sound Studio: Radio, Record Production, Television, and Film. Revised Edition.

    Science.gov (United States)

    Nisbett, Alec

    Detailed explanations of the studio techniques used in radio, record, television, and film sound production are presented in as non-technical language as possible. An introductory chapter discusses the physics and physiology of sound. Subsequent chapters detail standards for sound control in the studio; explain the planning and routine of a sound…

  16. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz song and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance on the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  17. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  18. 37 CFR 370.3 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  19. Incidental learning of sound categories is impaired in developmental dyslexia.

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights

  20. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Science.gov (United States)

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  1. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and environmental conditions. It is expected that this method can be integrated into an electronic-stethoscope biomedical system.
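
    The paper's heuristics are not spelled out in the abstract; this Python sketch shows a generic deterministic detector in the same spirit: envelope thresholding with a refractory period, then S1/S2 labeling from the rule that the systolic gap is shorter than the diastolic one. All constants are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def detect_heart_sounds(x, fs, min_gap=0.2, k=3.0):
    """Deterministic detection: envelope onsets above k * mean(envelope),
    separated by at least min_gap seconds (a refractory period), labeled
    S1/S2 by comparing the gaps before and after each detection."""
    env = np.abs(hilbert(x))
    thr = k * env.mean()
    gap = int(min_gap * fs)
    peaks, last = [], -gap
    for i in np.flatnonzero(env > thr):
        if i - last >= gap:
            peaks.append(i)
            last = i
    labels = []
    for j, p in enumerate(peaks[:-1]):     # last onset is left unlabeled
        nxt = peaks[j + 1] - p
        prv = p - peaks[j - 1] if j else np.inf
        labels.append("S1" if nxt < prv else "S2")  # systole < diastole
    return peaks, labels

# Synthetic demo: click pairs with alternating 0.3 s / 0.5 s gaps
fs = 2000
x = np.zeros(fs * 6)
t = 0.0
while t < 5.5:
    for dt in (0.0, 0.3):                  # "S1" then "S2" 0.3 s later
        i = int((t + dt) * fs)
        x[i:i + 40] += np.sin(2 * np.pi * 60 * np.arange(40) / fs)
    t += 0.8
peaks, labels = detect_heart_sounds(x, fs)
print(labels[:6])
```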

  2. Multi-Century Record of Anthropogenic Impacts on an Urbanized Mesotidal Estuary: Salem Sound, MA

    Science.gov (United States)

    Salem, MA, located north of Boston, has a rich, well-documented history dating back to settlement in 1626 CE, but the associated anthropogenic impacts on Salem Sound are poorly constrained. This project utilized dated sediment cores from the sound to assess the proxy record of an...

  3. Enabling Teachers to Develop Pedagogically Sound and Technically Executable Learning Designs

    NARCIS (Netherlands)

    Miao, Yongwu; Van der Klink, Marcel; Boon, Jo; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Van der Klink, M., Boon, J., Sloep, P. B., & Koper, R. (2009). Enabling Teachers to Develop Pedagogically Sound and Technically Executable Learning Designs [Special issue: Learning Design]. Distance Education, 30(2), 259-276.

  4. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
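
    Formant frequencies such as the F1 and F2 values compared in the study are commonly estimated from the pole angles of a linear-prediction (LPC) model; the Python sketch below uses that standard method (not necessarily the authors' exact procedure) on a synthetic two-resonance signal:

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """LPC by the autocorrelation method (Levinson-Durbin recursion)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:][: order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(a[:i] @ r[i:0:-1]) / err
        a[1:i + 1] += k * a[i - 1::-1]     # symmetric coefficient update
        err *= 1.0 - k * k
    return a

def formants(x, fs, order=12):
    """Estimate formant frequencies as the angles of the LPC poles."""
    x = x * np.hamming(len(x))
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])       # pre-emphasis
    roots = np.roots(lpc(x, order))
    roots = roots[np.imag(roots) > 0.01]             # one per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# Synthetic "snore" with resonances at 500 and 1500 Hz; estimates near those
# frequencies should appear among the printed values.
fs = 8000
sig = np.random.default_rng(9).normal(size=fs)
for f0, bw in ((500, 80), (1500, 120)):
    r = np.exp(-np.pi * bw / fs)
    w = 2 * np.pi * f0 / fs
    sig = lfilter([1.0], [1.0, -2 * r * np.cos(w), r * r], sig)
print(formants(sig[:2048], fs))
```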

  5. Learning language with the wrong neural scaffolding: The cost of neural commitment to sounds.

    Directory of Open Access Journals (Sweden)

    Amy Sue Finn

    2013-11-01

    Does tuning to one’s native language explain the sensitive period for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one’s native language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults’ difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower-level linguistic components of their native language.

  6. Learning language with the wrong neural scaffolding: the cost of neural commitment to sounds

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.; Ettlinger, Marc; Vytlacil, Jason; D'Esposito, Mark

    2013-01-01

    Does tuning to one's native language explain the “sensitive period” for language learning? We explore the idea that tuning to (or becoming more selective for) the properties of one's native-language could result in being less open (or plastic) for tuning to the properties of a new language. To explore how this might lead to the sensitive period for grammar learning, we ask if tuning to an earlier-learned aspect of language (sound structure) has an impact on the neural representation of a later-learned aspect (grammar). English-speaking adults learned one of two miniature artificial languages (MALs) over 4 days in the lab. Compared to English, both languages had novel grammar, but only one was comprised of novel sounds. After learning a language, participants were scanned while judging the grammaticality of sentences. Judgments were performed for the newly learned language and English. Learners of the similar-sounds language recruited regions that overlapped more with English. Learners of the distinct-sounds language, however, recruited the Superior Temporal Gyrus (STG) to a greater extent, which was coactive with the Inferior Frontal Gyrus (IFG). Across learners, recruitment of IFG (but not STG) predicted both learning success in tests conducted prior to the scan and grammatical judgment ability during the scan. Data suggest that adults' difficulty learning language, especially grammar, could be due, at least in part, to the neural commitments they have made to the lower level linguistic components of their native language. PMID:24273497

  7. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  8. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  9. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    Science.gov (United States)

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association

  10. 77 FR 47120 - Distribution of 2011 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2012-08-07

    ... the motion to ascertain whether any claimant entitled to receive such royalty fees has a reasonable... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2012-3 CRB DD 2011] Distribution of 2011 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  11. 37 CFR 382.12 - Royalty fees for the public performance of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... Preexisting Satellite Digital Audio Radio Services § 382.12 Royalty fees for the public performance of sound recordings and the making of ephemeral recordings. (a) In general. The monthly royalty fee to be paid by a...

  12. 75 FR 3666 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-01-22

    ... additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound recordings.... 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in general... Collective, late fees, statements of account, audit and verification of royalty payments and distributions...

  13. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    Science.gov (United States)

    Johnson-Glenberg, Mina C.; Birchfield, David; Uysal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  14. 76 FR 56483 - Distribution of 2010 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2011-09-13

    ... responses to the motion to ascertain whether any claimant entitled to receive such royalty fees has a... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2011-6 CRB DD 2010] Distribution of 2010 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  15. SMALLab: virtual geology studies using embodied learning with motion, sound, and graphics

    NARCIS (Netherlands)

    Johnson-Glenberg, M.C.; Birchfield, D.A.; Uysal, S.

    2009-01-01

    We present a new and innovative interface that allows the learner’s body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory (SMALLab) uses 3D object tracking, real time graphics, and surround‐sound to enhance embodied learning. Our hypothesis is

  16. Medical education of attention: A qualitative study of learning to listen to sound.

    Science.gov (United States)

    Harris, Anna; Flynn, Eleanor

    2017-01-01

    There has been little qualitative research examining how physical examination skills are learned, particularly the sensory and subjective aspects of learning. The authors set out to study how medical students are taught and learn the skills of listening to sound. As part of an ethnographic study in Melbourne, 15 semi-structured in-depth interviews were conducted with students and teachers as a way to reflect explicitly on their learning and teaching. From these interviews, we found that learning the skills of listening to lung sounds was frequently difficult for students, with many experiencing awkwardness, uncertainty, pressure, and intimidation. However not everyone found this process difficult. Often those who had studied music reported finding it easier to be attentive to the frequency and rhythm of body sounds and find ways to describe them. By incorporating, distinctively in medical education, theoretical insights into "attentiveness" from anthropology and science and technology studies, the article suggests that musical education provides medical students with skills in sensory awareness. Training the senses is a critical aspect of diagnosis that needs to be better addressed in medical education. Practical approaches for improving students' education of attention are proposed.

  17. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    Science.gov (United States)

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  18. Computer analysis of sound recordings from two Anasazi sites in northwestern New Mexico

    Science.gov (United States)

    Loose, Richard

    2002-11-01

    Sound recordings were made at a natural outdoor amphitheater in Chaco Canyon and in a reconstructed great kiva at Aztec Ruins. Recordings included computer-generated tones and swept sine waves, classical concert flute, Native American flute, conch shell trumpet, and prerecorded music. Recording equipment included an analog tape deck, a digital MiniDisc recorder, and direct digital recording to a laptop computer disk. Microphones and geophones were used as transducers. The natural amphitheater lies between the ruins of Pueblo Bonito and Chetro Ketl. It is a semicircular arc in a sandstone cliff measuring 500 ft. wide and 75 ft. high. The radius of the arc was verified with aerial photography, and an acoustic ray trace was generated using CAD software. The arc is in an overhanging cliff face and brings distant sounds to a line focus. Along this line, there are unusual acoustic effects at conjugate foci. Time history analysis of recordings from both sites showed that a 60-dB reverb decay lasted from 1.8 to 2.0 s, nearly ideal for public performances of music. Echoes from the amphitheater were perceived to be upshifted in pitch, but this was not seen in FFT analysis. Geophones placed on the floor of the great kiva showed a resonance at 95 Hz.
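
The 60-dB reverb decay figures above are the kind of quantity usually estimated with Schroeder backward integration of a recorded impulse response. The sketch below illustrates that calculation under stated assumptions: a hypothetical mono impulse-response file impulse.wav and the T20 extrapolation method. It is not the analysis code used in the study.

```python
# Sketch of reverberation-time estimation by Schroeder backward integration;
# illustrative only, assuming a mono impulse response recorded to "impulse.wav".
import numpy as np
from scipy.io import wavfile

sr, h = wavfile.read("impulse.wav")          # hypothetical mono recording
h = h.astype(float) / np.max(np.abs(h))

# Schroeder curve: reverse-cumulative energy, expressed in dB.
energy = np.cumsum(h[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(energy / energy[0])

# Fit a line over the -5 to -25 dB span and extrapolate to -60 dB (T20 method).
i5, i25 = np.argmax(edc_db <= -5), np.argmax(edc_db <= -25)
t = np.arange(len(h)) / sr
slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)
print("RT60 estimate: %.2f s" % (-60.0 / slope))
```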

  19. 37 CFR 260.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d..., 2007, a Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U...

  20. Surround by Sound: A Review of Spatial Audio Recording and Reproduction

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2017-05-01

    Full Text Available In this article, a systematic overview of various recording and reproduction techniques for spatial audio is presented. While binaural recording and rendering is designed to resemble the human two-ear auditory system and reproduce sounds specifically for a listener’s two ears, soundfield recording and reproduction using a large number of microphones and loudspeakers replicate an acoustic scene within a region. These two fundamentally different types of techniques are discussed in the paper. A recent popular area, multi-zone reproduction, is also briefly reviewed. The paper concludes with a discussion of the current state of the field and open problems.

  1. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Full Text Available Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word

  2. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity

    Science.gov (United States)

    Warlaumont, Anne S.; Finnegan, Megan K.

    2016-01-01

    At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant’s nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model’s frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one’s own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop
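
The core learning rule described above, STDP gated by a reward-driven dopamine signal, can be sketched compactly. The following toy numpy version is illustrative only: the spike trains are random stand-ins, the reward schedule is arbitrary, and none of the parameters come from the published model.

```python
# Toy sketch of reward-modulated STDP: pair-based STDP accumulates into an
# eligibility trace, and a scalar reward (dopamine stand-in) gates the actual
# weight change. Illustrative only; the published model uses a full spiking
# reservoir driving a vocal-tract simulation.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 20, 5
w = rng.uniform(0, 0.5, (n_post, n_pre))      # synaptic weights
elig = np.zeros_like(w)                       # eligibility traces

tau_e, a_plus, a_minus, lr = 50.0, 0.01, 0.012, 0.5
pre_trace = np.zeros(n_pre)
post_trace = np.zeros(n_post)

for t in range(1000):
    pre = rng.random(n_pre) < 0.05            # random pre spikes (toy input)
    post = rng.random(n_post) < 0.05          # random post spikes (toy output)
    pre_trace = pre_trace * 0.9 + pre
    post_trace = post_trace * 0.9 + post
    # Pre-before-post potentiates; post-before-pre depresses.
    stdp = a_plus * np.outer(post, pre_trace) - a_minus * np.outer(post_trace, pre)
    elig += -elig / tau_e + stdp
    # Stand-in for the salience-based reward used in the model.
    reward = 1.0 if rng.random() < 0.01 else 0.0
    w = np.clip(w + lr * reward * elig, 0.0, 1.0)
```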

  3. 37 CFR 270.1 - Notice of use of sound recordings under statutory license.

    Science.gov (United States)

    2010-07-01

    ..., and the primary purpose of the service is not to sell, advertise, or promote particular products or services other than sound recordings, live concerts, or other music-related events. (iv) A new subscription...

  4. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p = 0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely
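
For concreteness, the two-timescale structure of such sequences can be sketched as a generator: within each block one tone is common and the other is a rare deviant (p = 0.125), and the roles alternate across blocks. The tone labels and block length below are illustrative placeholders, not the study's exact timing.

```python
# Sketch of a two-timescale oddball sequence of the kind described above.
# Block length and labels are placeholders, not the published protocol.
import random

def make_sequence(n_blocks=4, tones_per_block=480, p_deviant=0.125):
    tones = ("30ms", "60ms")        # e.g., the duration-condition tone pair
    seq = []
    for block in range(n_blocks):
        # Roles swap every block: the first-rare tone becomes common, etc.
        common, rare = tones if block % 2 == 0 else tones[::-1]
        seq += [rare if random.random() < p_deviant else common
                for _ in range(tones_per_block)]
    return seq

sequence = make_sequence()
print(sequence[:12], "...", len(sequence), "tones")
```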

  5. [Effect of early scream sound stress on learning and memory in female rats].

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Huang, Chen

    2015-12-01

    To investigate the effect of early scream sound stress on spatial learning and memory ability, the levels of norepinephrine (NE) and corticosterone (CORT) in serum, and the morphology of the adrenal gland.
    Female Sprague-Dawley (SD) rats were treated daily with scream sound from postnatal day 1 (P1) for 21 d. The Morris water maze was used to measure spatial learning and memory ability. The levels of serum NE and CORT were determined by radioimmunoassay. The adrenal glands of the SD rats were collected, fixed in formalin, and then embedded in paraffin. The morphology of the adrenal gland was observed by HE staining.
    Exposure to early scream sound decreased the latency of escape and increased the number of platform crossings in the Morris water maze test (P < 0.05). These results suggest that early scream sound stress can enhance spatial learning and memory ability in adulthood, which is related to activation of the hypothalamo-pituitary-adrenal axis and sympathetic nervous system.

  6. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  7. The sound and the fury--bees hiss when expecting danger.

    Science.gov (United States)

    Wehmann, Henja-Niniane; Gustav, David; Kirkerud, Nicholas H; Galizia, C Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  8. 37 CFR 383.3 - Royalty fees for public performances of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... SUBSCRIPTION SERVICES § 383.3 Royalty fees for public performances of sound recordings and the making of... regulations for all years 2007 and earlier. Such fee shall be recoupable and credited against royalties due in...

  9. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights.

  10. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    Science.gov (United States)

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and, more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely a Crank-Nicolson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set, sound propagation conditions span from downward refracting to upward refracting, over acoustically hard and soft boundaries, at low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method and the Harmonoise and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
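
The evaluation logic can be sketched as follows: fit a statistical-learning regressor to (propagation condition, transmission loss) pairs and score it against a simple reference prediction. The data here are synthetic, and the skill-score definition (relative mean-squared-error improvement over the reference) is an assumption standing in for the paper's exact formula.

```python
# Sketch of the evaluation idea: a random-forest regressor trained on
# synthetic (condition -> transmission loss) pairs, scored against a simple
# reference predictor. The real study used CNPE simulations as ground truth.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))                 # stand-in propagation conditions
y = X @ rng.normal(size=5) + 0.3 * np.sin(X[:, 0] * 3) + rng.normal(0, 0.1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mse_model = np.mean((model.predict(X_te) - y_te) ** 2)
mse_ref = np.mean((np.mean(y_tr) - y_te) ** 2)  # homogeneous-atmosphere stand-in
print("skill score: %.1f%%" % (100 * (1 - mse_model / mse_ref)))
```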

  11. Enabling People Who Are Blind to Experience Science Inquiry Learning through Sound-Based Mediation

    Science.gov (United States)

    Levy, S. T.; Lahav, O.

    2012-01-01

    This paper addresses a central need among people who are blind: access to inquiry-based science learning materials, which few other learning environments that use assistive technologies provide. In this study, we investigated ways in which learning environments based on sound mediation can support science learning by blind people. We used…

  12. The Use of Conceptual Change Text toward Students’ Argumentation Skills in Learning Sound

    Science.gov (United States)

    Sari, B. P.; Feranie, S.; Winarno, N.

    2017-09-01

    This research aims to investigate the effect of Conceptual Change Text on students' argumentation skills in learning the concept of sound. The participants came from an international school in Bandung, Indonesia. The method used in this research is a quasi-experimental design; one control group (N=21) and one experimental group (N=21) were involved. The learning model used in both classes was a demonstration model that included teacher explanation and examples; the difference lay only in the teaching materials. The experimental group learned with the Conceptual Change Text, while the control group learned with the conventional book used in the school. The results showed that Conceptual Change Text instruction was better than the conventional book at improving students' argumentation skills for the sound concept. Based on these results, Conceptual Change Text instruction can be an alternative tool to significantly improve students' argumentation skills.

  13. The sound and the fury--bees hiss when expecting danger.

    Directory of Open Access Journals (Sweden)

    Henja-Niniane Wehmann

    Full Text Available Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees' sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees' hissing remain to be investigated.

  14. The Sound and the Fury—Bees Hiss when Expecting Danger

    Science.gov (United States)

    Galizia, C. Giovanni

    2015-01-01

    Honey bees are important model systems for the investigation of learning and memory and for a better understanding of the neuronal basics of brain function. Honey bees also possess a rich repertoire of tones and sounds, from queen piping and quacking to worker hissing and buzzing. In this study, we tested whether the worker bees’ sounds can be used as a measure of learning. We therefore conditioned honey bees aversively to odours in a walking arena and recorded both their sound production and their movement. Bees were presented with two odours, one of which was paired with an electric shock. Initially, the bees did not produce any sound upon odour presentation, but responded to the electric shock with a strong hissing response. After learning, many bees hissed at the presentation of the learned odour, while fewer bees hissed upon presentation of another odour. We also found that hissing and movement away from the conditioned odour are independent behaviours that can co-occur but do not necessarily do so. Our data suggest that hissing can be used as a readout for learning after olfactory conditioning, but that there are large individual differences between bees concerning their hissing reaction. The basis for this variability and the possible ecological relevance of the bees’ hissing remain to be investigated. PMID:25747702

  15. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

    The recording of sounds over the orbit of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and is tested under controlled conditions. The tests consist of measurement over the eyes in three healthy

  16. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sound recordings according to the Physionet Challenge 2016. The goal was to decide whether a recording refers to normal or abnormal heart sounds, or whether it is not possible to decide (i.e., 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
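
The first stage described above, band-limiting the phonocardiogram to 15-90 Hz and taking an amplitude envelope in which S1/S2 appear as peaks, can be sketched with scipy. The input file pcg.wav is hypothetical, and the peak picking is deliberately simplistic compared with the paper's detector.

```python
# Sketch of the envelope stage: band-pass a phonocardiogram to 15-90 Hz and
# take a Hilbert amplitude envelope; S1/S2 show up as prominent peaks.
# Assumes a mono recording in "pcg.wav"; illustrative, not the paper's code.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

sr, x = wavfile.read("pcg.wav")               # hypothetical mono input
x = x.astype(float)

sos = butter(4, [15, 90], btype="bandpass", fs=sr, output="sos")
envelope = np.abs(hilbert(sosfiltfilt(sos, x)))

# Candidate S1/S2 locations: prominent envelope peaks at least 200 ms apart.
peaks, _ = find_peaks(envelope, distance=int(0.2 * sr),
                      height=0.3 * envelope.max())
print("candidate heart-sound onsets (s):", peaks[:8] / sr)
```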

  17. Specially Designed Sound-Boxes Used by Students to Perform School-Lab Sensor–Based Experiments, to Understand Sound Phenomena

    Directory of Open Access Journals (Sweden)

    Stefanos Parskeuopoulos

    2011-02-01

    Full Text Available The research presented herein investigates and records students’ perceptions of sound phenomena and their improvement during a specialised laboratory practice utilizing ICT and a simple experimental apparatus specially designed for teaching. This school-lab apparatus and its operation are also described herein. Seventy-one first- and second-grade vocational-school students, aged 16 to 20, participated in the research. They were divided into groups of 4-5 students, each of which worked for 6 hours in order to complete all assigned activities. Data collection was carried out through personal interviews as well as questionnaires distributed before and after the instructive intervention. The results show that students’ active involvement with the simple teaching apparatus, through which the effects of sound waves are visible, helps them comprehend sound phenomena. It also considerably altered their initial misconceptions about sound propagation. The results are presented diagrammatically herein, while some important observations are made relating to the teaching and learning of scientific concepts concerning sound.

  18. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    Science.gov (United States)

    Miller, Robert E. (Robin)

    2005-04-01

    In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation termed PerAmbio 3D/2D (Pat. pending) is described in theory and subjective tests. It captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.

  19. Acoustic analysis of snoring sounds recorded with a smartphone according to obstruction site in OSAS patients.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Kim, Yang Jae; Moon, Ji Seung; Kim, Young Jun; Jung, Sung Hoon

    2017-03-01

    Snoring is a sign of increased upper airway resistance and is the most common symptom suggestive of obstructive sleep apnea. Acoustic analysis of snoring sounds is a non-invasive diagnostic technique and may provide a screening test that can determine the location of obstruction sites. We recorded snoring sounds according to obstruction level, measured by DISE, using a smartphone, and focused on the analysis of formant frequencies. The study group comprised 32 male patients (mean age 42.9 years). The spectrogram pattern, intensity (dB), fundamental frequency (F0), and formant frequencies (F1, F2, and F3) of the snoring sounds were analyzed for each subject. On spectrographic analysis, retropalatal level obstruction tended to produce sharp and regular peaks, while retrolingual level obstruction tended to show peaks with a gradual onset and decay. On formant frequency analysis, F1 (retropalatal vs. retrolingual level: 488.1 ± 125.8 vs. 634.7 ± 196.6 Hz) and F2 (retropalatal vs. retrolingual level: 1267.3 ± 306.6 vs. 1723.7 ± 550.0 Hz) of retrolingual level obstructions showed significantly higher values than retropalatal level obstruction (p < 0.05). These findings suggest that a smartphone can be effective for recording snoring sounds.

  20. Letter-speech sound learning in children with dyslexia : From behavioral research to clinical practice

    NARCIS (Netherlands)

    Aravena, S.

    2017-01-01

    In alphabetic languages, learning to associate speech-sounds with unfamiliar characters is a critical step in becoming a proficient reader. This dissertation aimed at expanding our knowledge of this learning process and its relation to dyslexia, with an emphasis on bridging the gap between

  1. Production of grooming-associated sounds by chimpanzees (Pan troglodytes) at Ngogo: variation, social learning, and possible functions.

    Science.gov (United States)

    Watts, David P

    2016-01-01

    Chimpanzees (Pan troglodytes) use some communicative signals flexibly and voluntarily, with use influenced by learning. These signals include some vocalizations and also sounds made using the lips, oral cavity, and/or teeth, but not the vocal tract, such as "attention-getting" sounds directed at humans by captive chimpanzees and lip smacking during social grooming. Chimpanzees at Ngogo, in Kibale National Park, Uganda, make four distinct sounds while grooming others. Here, I present data on two of these ("splutters" and "teeth chomps") and consider whether social learning contributes to variation in their production and whether they serve social functions. Higher congruence in the use of these two sounds between dyads of maternal relatives than dyads of non-relatives implies that social learning occurs and mostly involves vertical transmission, but the results are not conclusive and it is unclear which learning mechanisms may be involved. In grooming between adult males, tooth chomps and splutters were more likely in long than in short bouts; in bouts that were bidirectional rather than unidirectional; in grooming directed toward high-ranking males than toward low-ranking males; and in bouts between allies than in those between non-allies. Males were also more likely to make these sounds while they were grooming other males than while they were grooming females. These results are expected if the sounds promote social bonds and induce tolerance of proximity and of grooming by high-ranking males. However, the alternative hypothesis that the sounds are merely associated with motivation to groom, with no additional social function, cannot be ruled out. Limited data showing that bouts accompanied by teeth chomping or spluttering at their initiation were longer than bouts for which this was not the case point toward a social function, but more data are needed for a definitive test. Comparison to other research sites shows that the possible existence of grooming

  2. 76 FR 45695 - Notice and Recordkeeping for Use of Sound Recordings Under Statutory License

    Science.gov (United States)

    2011-08-01

    ... operating under these licenses are required to, among other things, pay royalty fees and report to copyright... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Parts 370 and 382 [Docket No. RM 2011-5] Notice and Recordkeeping for Use of Sound Recordings Under Statutory License AGENCY: Copyright Royalty Board...

  3. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, and hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is its sensitivity to noise commonly encountered in the clinical setting. Because the CAD sounds are faint, this noise can easily obscure them and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
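
A reference-channel scheme of the kind described can be illustrated with adaptive noise cancellation. The toy LMS sketch below, on synthetic signals, subtracts noise correlated with a reference microphone from the primary channel; it is a generic textbook technique, not the authors' specific algorithm.

```python
# Toy LMS adaptive-noise-cancellation sketch in the spirit described above:
# a reference microphone that hears mostly ambient noise is used to subtract
# correlated noise from the primary (chest) channel. Synthetic signals only.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
heart = 0.1 * np.sin(2 * np.pi * 30 * np.arange(n) / 2000)   # faint target
noise = rng.normal(size=n)
# Primary channel: target plus a causally filtered copy of the noise.
primary = heart + 0.6 * noise + 0.3 * np.roll(noise, 1) + 0.1 * np.roll(noise, 2)
reference = noise                                            # noise-only channel

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    x = reference[i - taps + 1:i + 1][::-1]   # ref[i], ref[i-1], ..., ref[i-7]
    e = primary[i] - w @ x          # error = primary minus predicted noise
    w += 2 * mu * e * x             # LMS weight update
    out[i] = e                      # residual approximates the heart sound

print("noise power before/after: %.3f / %.3f"
      % (np.var(primary - heart), np.var(out[taps:] - heart[taps:])))
```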

  4. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  5. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  6. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Science.gov (United States)

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
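
A much-simplified sketch of the first-layer idea: run linear ICA over binaural spectrogram frames and inspect the learned basis functions for interaural structure. Unlike the study, this uses real-valued log-magnitude spectrograms and synthetic stereo noise; all file names and parameters are illustrative.

```python
# Minimal sketch of the approach: linear ICA over short binaural spectrogram
# frames, so learned basis functions can be inspected for spatial structure.
# Synthetic correlated stereo noise stands in for the natural recordings.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 16000
left = rng.normal(size=fs * 10)                # stand-ins for ear signals
right = 0.6 * left + 0.4 * rng.normal(size=fs * 10)

# Log-magnitude spectrograms of both ears, stacked per time frame.
_, _, L = stft(left, fs=fs, nperseg=256)
_, _, R = stft(right, fs=fs, nperseg=256)
frames = np.vstack([np.log1p(np.abs(L)), np.log1p(np.abs(R))]).T

ica = FastICA(n_components=20, random_state=0, max_iter=500)
ica.fit(frames)

# Each row of mixing_ spans left-ear then right-ear frequency bins; comparing
# the two halves reveals any interaural structure a component has picked up.
basis = ica.mixing_.T
print("learned basis functions:", basis.shape)
```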

  7. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    2014-03-01

    Full Text Available To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.

  8. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss spatial aspects of recorded sound analytically, William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  9. NOAA Climate Data Record (CDR) of Advanced Microwave Sounding Unit (AMSU)-A Brightness Temperature, Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Climate Data Record (CDR) for Advanced Microwave Sounding Unit-A (AMSU-A) brightness temperature in "window channels". The data cover a time period from...

  10. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  11. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as the overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
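
Extracting the two cues analyzed above from a stereo recording is straightforward with an STFT: per-frequency interaural level differences (ILD) and interaural phase differences (IPD). A minimal sketch, assuming a hypothetical two-channel file scene.wav; it is not the authors' processing chain.

```python
# Sketch of binaural cue extraction: per-frequency ILD (in dB) and IPD
# (in radians) from STFT frames of a stereo recording. "scene.wav" is a
# hypothetical two-channel file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

sr, x = wavfile.read("scene.wav")             # shape (n_samples, 2) expected
left, right = x[:, 0].astype(float), x[:, 1].astype(float)

f, t, L = stft(left, fs=sr, nperseg=512)
_, _, R = stft(right, fs=sr, nperseg=512)

eps = 1e-12
ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
ipd = np.angle(L * np.conj(R))                # wrapped to (-pi, pi]

# Empirical cue spread per frequency channel, as analyzed in the study.
print("ILD spread per channel (dB):", np.std(ild, axis=1)[:5])
print("IPD spread per channel (rad):", np.std(ipd, axis=1)[:5])
```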

  12. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
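
The classification setup, per-recording signal quality indices feeding a logistic-regression model, can be sketched as below. The four indices shown are illustrative stand-ins rather than the paper's nine SQIs, and the good/poor recordings are synthetic.

```python
# Sketch of the quality-classification setup: simple per-recording signal
# quality indices feeding a logistic-regression classifier. The indices and
# the synthetic labels are illustrative, not the paper's nine SQIs.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sqi_features(signal):
    # Per-recording indices: variance, kurtosis, skewness, zero-crossing rate.
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    return [np.var(signal), kurtosis(signal), skew(signal), zcr]

# Synthetic "good" (clean, tone-like) vs "poor" (noise-dominated) recordings.
good = [np.sin(np.linspace(0, 60, 6000)) + 0.05 * rng.normal(size=6000)
        for _ in range(50)]
poor = [rng.normal(size=6000) for _ in range(50)]

X = np.array([sqi_features(s) for s in good + poor])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy: %.2f" % clf.score(X, y))
```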

  13. Prenatal complex rhythmic music sound stimulation facilitates postnatal spatial learning but transiently impairs memory in the domestic chick.

    Science.gov (United States)

    Kauser, H; Roy, S; Pal, A; Sreenivas, V; Mathur, R; Wadhwa, S; Jain, S

    2011-01-01

    Early experience has a profound influence on brain development, and the modulation of prenatal perceptual learning by external environmental stimuli has been shown in birds, rodents and other mammals. In the present study, the effect of prenatal complex rhythmic music sound stimulation on postnatal spatial learning, memory and isolation stress was examined. Auditory stimulation with either music or species-specific sounds, or no stimulation (control), was provided to separate sets of fertilized eggs from day 10 of incubation. Following hatching, the chicks were tested on a T-maze at ages 24, 72 and 120 h for spatial learning, and memory of the learnt task was assessed 24 h after training. In the posthatch chicks at all ages, plasma corticosterone levels were estimated following 10 min of isolation. Chicks of all ages in the three groups took significantly less time to complete the maze task over training trials; however, when memory was tested 24 h after training, only the music-stimulated chicks at posthatch age 24 h took significantly longer. These results suggest that prenatal stimulation with complex rhythmic music sounds facilitates spatial learning, though the music stimulation transiently impairs postnatal memory. 2011 S. Karger AG, Basel.

  14. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  15. Not so primitive: context-sensitive meta-learning about unattended sound sequences.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa R; Cooper, Gavin; Heathcote, Andrew

    2013-01-01

    Mismatch negativity (MMN), an event-related potential elicited when a "deviant" sound violates a regularity in the auditory environment, is integral to auditory scene processing and has been used to demonstrate "primitive intelligence" in auditory short-term memory. Using a new multiple-context and -timescale protocol we show that MMN magnitude displays a context-sensitive modulation depending on changes in the probability of a deviant at multiple temporal scales. We demonstrate a primacy bias causing asymmetric evidence-based modulation of predictions about the environment, and we demonstrate that learning how to learn about deviant probability (meta-learning) induces context-sensitive variation in the accessibility of predictive long-term memory representations that underpin the MMN. The existence of the bias and of meta-learning is consistent with automatic attributions of behavioral salience governing relevance-filtering processes operating outside of awareness.

  16. Home recording for musicians for dummies

    CERN Document Server

    Strong, Jeff

    2008-01-01

    Invaluable advice that will be music to your ears! Are you thinking of getting started in home recording? Do you want to know the latest home recording technologies? Home Recording For Musicians For Dummies will get you recording music at home in no time. It shows you how to set up a home studio, record and edit your music, master it, and even distribute your songs. With this guide, you'll learn how to compare studio-in-a-box, computer-based, and stand-alone recording systems and choose what you need. You'll gain the skills to manage your sound, take full advantage of MIDI…

  17. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

    During the recording of lung sound (LS) signals from the chest wall of a subject, there is always a heart sound (HS) signal interfering with it. This obscures the features of the lung sound signals and creates confusion about pathological states, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components; some of these components contain larger proportions of interfering signals such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
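    A hedged Python sketch of the decomposition step, assuming the PyEMD package (pip install EMD-signal); which IMFs carry the heart sound is a per-recording judgment, so the selection rule below is only illustrative:

    import numpy as np
    from PyEMD import EMD

    fs = 4000
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(2)
    lung = 0.3 * rng.standard_normal(t.size)                                   # broadband stand-in
    heart = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)   # low-frequency bursts
    mixed = lung + heart

    imfs = EMD().emd(mixed)                        # intrinsic mode functions, high to low frequency
    # Crude rule: IMFs with a low zero-crossing rate (a rough frequency proxy)
    # are attributed to heart sound and removed; 100 crossings/s is an assumed cutoff.
    zcr = [np.mean(np.abs(np.diff(np.sign(imf)))) / 2 * fs for imf in imfs]
    keep = [imf for imf, z in zip(imfs, zcr) if z > 100]
    cleaned = np.sum(keep, axis=0) if keep else np.zeros_like(mixed)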

  18. Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.

    Science.gov (United States)

    Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro

    2018-03-09

    Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in the assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammer sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and a model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrate that the proposed scheme achieves favorable assessment accuracy with high efficiency and low computational load.
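    The paper's own algorithms are not reproduced here; the sketch below shows the general incremental-update pattern with scikit-learn's SGDClassifier, whose partial_fit supports this kind of sequential treatment (features and labels are placeholders):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(3)
    clf = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])                      # healthy / defective

    for batch in range(10):                         # batches arriving from inspection sites
        X = rng.standard_normal((64, 40))           # placeholder response features
        y = rng.integers(0, 2, 64)                  # placeholder expert annotations
        clf.partial_fit(X, y, classes=classes)      # model adapts to each new batch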

  19. [Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918] / Tõnu Tannberg

    Index Scriptorium Estoniae

    Tannberg, Tõnu, 1961-

    2013-01-01

    Arvustus: Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918 (Das Baltikum in Geschichte und Gegenwart, 5). Hrsg. von Jaan Ross. Böhlau Verlag. Köln, Weimar und Wien 2012

  20. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    This work analyzes environmental sounds in consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, with a fixed-size clip-level summary feature. For each concept, an SVM-based classifier is trained using one of three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local sound object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model-based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common by all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches, tested on 60 h of personal audio archives and 1900 YouTube video clips, is significantly better than that of existing algorithms for detecting these useful concepts in real-world personal audio recordings.
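    A hedged Python sketch of the clip-level modeling idea: each soundtrack is summarised as a Gaussian over frame features, and clips are compared with a symmetrised Kullback-Leibler divergence that can feed an SVM through exp(-gamma * d). The MFCC-like frames are random placeholders:

    import numpy as np

    def gaussian_kl(mu0, cov0, mu1, cov1):
        """KL(N0 || N1) for full-covariance Gaussians."""
        k = mu0.size
        inv1 = np.linalg.inv(cov1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

    rng = np.random.default_rng(8)
    feats_a = rng.standard_normal((500, 13))          # placeholder MFCC-like frames, clip A
    feats_b = rng.standard_normal((500, 13)) + 0.5    # clip B

    mu_a, cov_a = feats_a.mean(0), np.cov(feats_a.T)
    mu_b, cov_b = feats_b.mean(0), np.cov(feats_b.T)
    d = gaussian_kl(mu_a, cov_a, mu_b, cov_b) + gaussian_kl(mu_b, cov_b, mu_a, cov_a)
    print(np.exp(-0.1 * d))                           # kernel value an SVM could consume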

  1. Auditory learning through active engagement with sound: Biological impact of community music lessons in at-risk children

    Directory of Open Access Journals (Sweden)

    Nina eKraus

    2014-11-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements in the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1,000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for one year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to an instrumental training class. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. These findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity during training and may inform the development of strategies for auditory learning…

  2. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  3. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sound. A heart sound can be heard directly with a stethoscope or indirectly with a phonocardiograph, a machine for recording heart sound. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps in classifying the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The fractal dimensions of all phonocardiograms were then classified with KNN and fuzzy c-mean clustering methods. Based on the research results, the best accuracy obtained was 86.17%, achieved with feature extraction by DWT decomposition level 3, kmax = 50, and 5 neighbors in the K-NN algorithm, using 5-fold cross-validation. For fuzzy c-mean clustering, the accuracy was 78.56%.
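    For reference, a compact Python implementation of Higuchi's algorithm, the fractal dimension estimator used above; kmax follows the paper's best-performing value of 50, and the test signal is white noise, whose fractal dimension should come out near 2:

    import numpy as np

    def higuchi_fd(x, kmax=50):
        """Estimate the fractal dimension of a 1-D signal via Higuchi's algorithm."""
        x = np.asarray(x, dtype=float)
        N = x.size
        ks = np.arange(1, kmax + 1)
        L = np.empty(ks.size)
        for i, k in enumerate(ks):
            Lmk = []
            for m in range(k):                        # k sub-curves with offset m
                idx = np.arange(m, N, k)
                if idx.size < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()  # curve length of the sub-series
                norm = (N - 1) / ((idx.size - 1) * k) # Higuchi normalisation
                Lmk.append(dist * norm / k)
            L[i] = np.mean(Lmk)
        # The slope of log(L) against log(1/k) estimates the fractal dimension.
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
        return slope

    print(higuchi_fd(np.random.default_rng(4).standard_normal(2048)))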

  4. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  5. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can

  6. SoundScapes: non-formal learning potentials from interactive VEs

    DEFF Research Database (Denmark)

    Brooks, Tony; Petersson, Eva

    2007-01-01

    Non-formal learning is evident in an inhabited information space created from non-invasive multi-dimensional sensor technologies that source human gesture. Libraries of intuitive interfaces empower natural interaction in which gesture is mapped to multisensory content. Large screen… and international bodies have consistently recognized SoundScapes, which, as a research body of work, is directly responsible for numerous patents.

  7. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  8. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions… The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton’s notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound’s causality… A further chapter examines the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics…

  9. Reconstruction of mechanically recorded sound from an edison cylinder using three dimensional non-contact optical surface metrology

    Energy Technology Data Exchange (ETDEWEB)

    Fadeyev, V.; Haber, C.; Maul, C.; McBride, J.W.; Golden, M.

    2004-04-20

    Audio information stored in the undulations of grooves in a medium such as a phonograph disc record or cylinder may be reconstructed, without contact, by measuring the groove shape using precision optical metrology methods and digital image processing. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two dimensional image acquisition and analysis methods. The present work reports the first three dimensional reconstruction of mechanically recorded sound. The source material, a celluloid cylinder, was scanned using color coded confocal microscopy techniques and resulted in a faithful playback of the recorded information.

  10. Sound-symbolism boosts novel word learning

    NARCIS (Netherlands)

    Lockwood, G.F.; Dingemanse, M.; Hagoort, P.

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  11. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear, but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  12. Usefulness of bowel sound auscultation: a prospective evaluation.

    Science.gov (United States)

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Bowel sound recordings were prospectively collected from patients with normal gastrointestinal motility, with SBO diagnosed by computed tomography and confirmed at surgery, and with POI diagnosed by clinical symptoms and computed tomography without a transition point. Study clinicians were instructed to categorize each patient recording as normal, obstructed, ileus, or not sure. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. Ten recordings randomly selected from each category, with 15 of them duplicated, were replayed through speakers to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinicians' ability to properly categorize the bowel sound recordings when blinded to additional clinical information. Secondary outcomes were the clinicians' perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value of normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability of duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians for sensitivity, positive predictive value, or intra-rater variability. Overall, 44% of clinicians reported that they rarely listened…

  13. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  14. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  15. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  16. The Changing Role of Sound-Symbolism for Small Versus Large Vocabularies.

    Science.gov (United States)

    Brand, James; Monaghan, Padraic; Walker, Peter

    2017-12-12

    Natural language contains many examples of sound-symbolism, where the form of the word carries information about its meaning. Such systematicity is more prevalent in the words children acquire first, but arbitrariness dominates during later vocabulary development. Furthermore, systematicity appears to promote learning category distinctions, which may become more important as the vocabulary grows. In this study, we tested the relative costs and benefits of sound-symbolism for word learning as vocabulary size varies. Participants learned form-meaning mappings for words which were either congruent or incongruent with regard to sound-symbolic relations. For the smaller vocabulary, sound-symbolism facilitated learning individual words, whereas for larger vocabularies sound-symbolism supported learning category distinctions. The changing properties of form-meaning mappings according to vocabulary size may reflect the different ways in which language is learned at different stages of development. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  17. Investigating the relationship between pressure force and acoustic waveform in footstep sounds

    DEFF Research Database (Denmark)

    Grani, Francesco; Serafin, Stefania; Götzen, Amalia De

    2013-01-01

    In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were performed using a pair of sandals embedded with six pressure sensors each. An investigation of the relationships between recorded force and footstep sounds is presented, together with several possible applications of the system.

  18. Learning for Everyday Life: Students' standpoints on loud sounds and use of hearing protectors before and after a teaching-learning intervention

    Science.gov (United States)

    West, Eva

    2012-11-01

    Researchers have highlighted the increasing problem of loud sounds among young people in leisure-time environments, recently even emphasizing portable music players, because of the risk of suffering from hearing impairments such as tinnitus. However, there is a lack of studies investigating compulsory-school students' standpoints and explanations in connection with teaching interventions integrating school subject content with auditory health. In addition, there are few health-related studies in the international science education literature. This paper explores students' standpoints on loud sounds including the use of hearing-protection devices in connection with a teaching intervention based on a teaching-learning sequence about sound, hearing and auditory health. Questionnaire data from 199 students, in grades 4, 7 and 8 (aged 10-14), from pre-, post- and delayed post-tests were analysed. Additionally, information on their experiences of tinnitus as well as their listening habits regarding portable music players was collected. The results show that more students make healthier choices in questions of loud sounds after the intervention, and especially among the older ones this result remains or is further improved one year later. There are also signs of positive behavioural change in relation to loud sounds. Significant gender differences are found; generally, the girls show more healthy standpoints and expressions than boys do. If this can be considered to be an outcome of students' improved and integrated knowledge about sound, hearing and health, then this emphasizes the importance of integrating health issues into regular school science.

  19. Sound production in recorder-like instruments : II. a simulation model

    NARCIS (Netherlands)

    Verge, M.P.; Hirschberg, A.; Causse, R.

    1997-01-01

    A simple one-dimensional representation of recorderlike instruments, that can be used for sound synthesis by physical modeling of flutelike instruments, is presented. This model combines the effects on the sound production by the instrument of the jet oscillations, vortex shedding at the edge of the…

  20. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz, but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint, and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher range (117-1922 Hz) than in the skin-contact recordings (upper limit 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds, but need methods also capable of recording the high frequency vibrations.

  1. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  2. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using Takens' method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
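    A hedged Python sketch of the class-modeling step, assuming the hmmlearn package; the recurrence-plot features are placeholders, and a simple log-likelihood comparison stands in for the paper's Viterbi-based detection:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(5)
    # Placeholder sequences of 3 recurrence-plot features per frame.
    swallow_seqs = [rng.standard_normal((30, 3)) + 1.0 for _ in range(20)]
    breath_seqs = [rng.standard_normal((40, 3)) for _ in range(20)]

    def fit_hmm(seqs, n_states=3):
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        return GaussianHMM(n_components=n_states, random_state=0).fit(X, lengths)

    hmm_swallow = fit_hmm(swallow_seqs)
    hmm_breath = fit_hmm(breath_seqs)

    test = rng.standard_normal((35, 3)) + 1.0         # unseen feature sequence
    label = "swallow" if hmm_swallow.score(test) > hmm_breath.score(test) else "breath"
    print(label)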

  3. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address these low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency but opposite phase to the recorded infra-sound. They also produce pressure wave components within the audible range, masking and thereby minimizing the sensing of the residual infra-sound.
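    A minimal numpy sketch of the anti-phase idea; it is idealized, since perfect amplitude and phase alignment is assumed, which real systems only approximate:

    import numpy as np

    fs = 1000
    t = np.arange(0, 10, 1 / fs)
    infra = 0.5 * np.sin(2 * np.pi * 0.8 * t)        # 0.8 Hz infra-sound component
    anti = -infra                                    # loudspeaker drive signal: equal amplitude, opposite phase
    residual = infra + anti
    print(np.max(np.abs(residual)))                  # ~0 under these ideal assumptions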

  4. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  5. 37 CFR 382.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.2 Royalty fees for the... monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d)(2) and the...

  6. The cinematic soundscape: conceptualising the use of sound in Indian films

    OpenAIRE

    Budhaditya Chattopadhyay

    2012-01-01

    This article examines the trajectories of sound practice in Indian cinema and conceptualises the use of sound since the advent of talkies. By studying and analysing a number of sound-films from different technological phases of direct recording, magnetic recording and present-day digital recording, the article proposes three corresponding models that are developed on the basis of observations on the use of sound in Indian cinema. These models take their point of departure in specific phases...

  7. Sound For Animation And Virtual Reality

    Science.gov (United States)

    Hahn, James K.; Docter, Pete; Foster, Scott H.; Mangini, Mark; Myers, Tom; Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1995-01-01

    Sound is an integral part of the experience in computer animation and virtual reality. In this course, we will present some of the important technical issues in sound modeling, rendering, and synchronization as well as the "art" and business of sound that are being applied in animations, feature films, and virtual reality. The central theme is to bring leading researchers and practitioners from various disciplines to share their experiences in this interdisciplinary field. The course will give the participants an understanding of the problems and techniques involved in producing and synchronizing sounds, sound effects, dialogue, and music. The problem spans a number of domains including computer animation and virtual reality. Since sound has been an integral part of animations and films much longer than for computer-related domains, we have much to learn from traditional animation and film production. By bringing leading researchers and practitioners from a wide variety of disciplines, the course seeks to give the audience a rich mixture of experiences. It is expected that the audience will be able to apply what they have learned from this course in their research or production.

  8. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to future implementation of acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials, and are then analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphic user interface. The analyses of the recorded data reveal no significant differences, either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
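    A minimal Python sketch of the convolution step described above (scipy assumed); the HRIRs here are random placeholders, whereas real ones would come from a measured non-individual HRTF set:

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(6)
    fs = 44100
    click = np.zeros(fs // 10)
    click[0] = 1.0                                    # Delta-like sound
    decay = np.exp(-np.arange(256) / 32.0)
    hrir_left = rng.standard_normal(256) * decay      # placeholder left-ear impulse response
    hrir_right = rng.standard_normal(256) * decay     # placeholder right-ear impulse response

    left = fftconvolve(click, hrir_left)
    right = fftconvolve(click, hrir_right)
    binaural = np.stack([left, right], axis=1)        # two-channel signal for headphone playback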

  9. Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants

    Directory of Open Access Journals (Sweden)

    Yuko eYamashita

    2013-02-01

    The purpose of this study was to explore developmental changes, in terms of spectral fluctuations and temporal periodicity, in Japanese- and English-learning infants. Three age groups (15, 20, and 24 months) were selected, because infants diversify their phonetic inventories with age. Natural speech of the infants was recorded. We utilized a critical-band-filter bank, which simulated the frequency resolution of the adult auditory periphery. First, the correlations between the critical-band outputs represented by factor analysis were observed, in order to see how the critical bands should be connected to each other if a listener is to differentiate sounds in infants’ speech. In the following analysis, we analyzed the temporal fluctuations of factor scores by calculating autocorrelations. The present analysis identified three factors observed in adult speech at 24 months of age in both linguistic environments. These three factors were shifted to a higher frequency range, corresponding to the smaller vocal tract size of the infants. The results suggest that the vocal tract structures of the infants had developed to an adult-like configuration by 24 months of age in both language environments. The amount of utterances with a periodic nature of shorter duration increased with age in both environments. This trend was clearer in the Japanese environment.

  10. The Early Years: Becoming Attuned to Sound

    Science.gov (United States)

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  11. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topics…

  12. A machine learning approach to create blocking criteria for record linkage.

    Science.gov (United States)

    Giang, Phan H

    2015-03-01

    Record linkage, a part of data cleaning, is recognized as one of the most expensive steps in data warehousing. Most record linkage (RL) systems employ a strategy of using blocking filters to reduce the number of pairs to be matched. A blocking filter consists of a number of blocking criteria. Until recently, blocking criteria were selected manually by domain experts. This paper proposes a new method to automatically learn efficient blocking criteria for record linkage. Our method addresses the lack of sufficient labeled data for training. Unlike previous works, we do not consider a blocking filter in isolation but in the context of an accompanying matcher which is employed after the blocking filter. We show that given such a matcher, the labels (assigned to record pairs) that are relevant for learning are the labels assigned by the matcher (link/nonlink), not the labels assigned objectively (match/unmatch). This conclusion allows us to generate an unlimited amount of labeled data for training. We formulate the problem of learning a blocking filter as a Disjunctive Normal Form (DNF) learning problem and use the Probably Approximately Correct (PAC) learning theory to guide the development of an algorithm to search for blocking filters. We tested the algorithm on a real patient master file of 2.18 million records. The experimental results show that, compared with filters obtained by educated guess, the optimal learned filters have comparable recall but reduce throughput (runtime) by an order-of-magnitude factor.
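    To make the role of a single blocking criterion concrete, here is a toy Python illustration; the records, fields, and the "first three letters of surname plus birth year" key are invented for the example:

    from collections import defaultdict
    from itertools import combinations

    records = [
        {"id": 1, "surname": "Johnson",  "birth_year": 1970},
        {"id": 2, "surname": "Jonson",   "birth_year": 1970},
        {"id": 3, "surname": "Smith",    "birth_year": 1985},
        {"id": 4, "surname": "Johnston", "birth_year": 1970},
    ]

    blocks = defaultdict(list)
    for r in records:
        key = (r["surname"][:3].lower(), r["birth_year"])   # one blocking criterion
        blocks[key].append(r["id"])

    # Only records sharing a block key become candidate pairs; all others are never compared.
    candidate_pairs = [p for ids in blocks.values() for p in combinations(ids, 2)]
    total_pairs = len(records) * (len(records) - 1) // 2
    print(candidate_pairs, f"reduction: {1 - len(candidate_pairs) / total_pairs:.2f}")

    Note that "Johnson" and "Jonson" fall into different blocks here, which is exactly the kind of recall loss a learned filter must trade off against throughput.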

  13. Jamming and learning

    DEFF Research Database (Denmark)

    Brinck, Lars

    2017-01-01

    …music-academy students ‘sitting in’. Fieldwork was documented through sound recordings, diaries, and field notes from participant observation and informal interviews. The analyses apply a situated learning theoretical perspective to the band members’ as well as the students’ participation, and reveal important learning to take place. The analyses also indicate that the musicians’ changing participation is analytically inseparable from the changing music itself. The study’s final argument is two-fold: revitalising jamming as a studio-recording practice within popular music highlights important aspects of professional musicians’ interactive communication processes, and transferring this artistic endeavour into an educational practice suggests an increased focus on students ‘sitting in’ with professional bands, and on teachers playing alongside students.

  14. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    NARCIS (Netherlands)

    Fraga González, G.; Žarić, G.; Tijms, J.; Bonte, M.; van der Molen, M.W.

    We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for…

  15. Digital recording as a teaching and learning method in the skills laboratory.

    Science.gov (United States)

    Strand, Ingebjørg; Gulbrandsen, Lise; Slettebø, Åshild; Nåden, Dagfinn

    2017-09-01

    To obtain information on how nursing students react to, think about and learn from digital recording as a learning and teaching method over time. Based on the teaching and learning philosophy of the university college, we used digital recording as a tool in our daily sessions in skills laboratory. However, most of the studies referred to in the background review had a duration of from only a few hours to a number of days. We found it valuable to design a study with a duration of two academic semesters. A descriptive and interpretative design was used. First-year bachelor-level students at the department of nursing participated in the study. Data collection was carried out by employing an 'online questionnaire'. The students answered five written, open-ended questions after each of three practical skill sessions. Kvale and Brinkmann's three levels of understanding were employed in the analysis. The students reported that digital recording affected factors such as feeling safe, secure and confident and that video recording was essential in learning and training practical skills. The use of cameras proved to be useful, as an expressive tool for peer learning because video recording enhances self-assessment, reflection, sensing, psychomotor performance and discovery learning. Digital recording enhances the student's awareness when acquiring new knowledge because it activates cognitive and emotional learning. The connection between tutoring, feedback and technology was clear. The digital recorder gives students direct and immediate feedback on their performance from the various practical procedures, and may aid in the transition from theory to practice. Students experienced more self-confidence and a feeling of safety in their performances. © 2016 John Wiley & Sons Ltd.

  16. Visual bias in subjective assessments of automotive sounds

    DEFF Research Database (Denmark)

    Ellermeier, Wolfgang; Legarth, Søren Vase

    2006-01-01

    In order to evaluate how strong the influence of visual input on sound quality evaluation may be, a naive sample of 20 participants was asked to judge interior automotive sound recordings while simultaneously being exposed to pictures of cars. Twenty-two recordings of second-gear acceleration…

  17. Records management: a basis for organizational learning and innovation

    Directory of Open Access Journals (Sweden)

    Francisco José Aragão Pedroza Cunha

    The understanding of (trans)formations related to organizational learning processes and knowledge recording can promote innovation. The objective of this study was to review the conceptual contributions of several studies regarding Organizational Learning and Records Management, and to highlight the importance of knowledge records as an advanced management technique for the development and attainment of innovation. To accomplish this goal, an exploratory and multidisciplinary literature review was conducted. The results indicated that the identification and application of management models to represent knowledge is a challenge for organizations aiming to promote conditions for the creation and use of knowledge in order to transform it into organizational innovation. Organizations can create spaces and environments for local, regional, national, and global exchange with the strategic goal of generating and sharing knowledge, provided they know how to utilize Records Management mechanisms.

  18. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0088063)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  19. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0077910)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  20. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
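    A hedged Python sketch of getting started with the database: the training set distributes recordings as WAV files, so standard scipy tooling suffices; the file name a0001.wav is illustrative and assumes a local copy of a record:

    import numpy as np
    from scipy.io import wavfile

    fs, pcg = wavfile.read("a0001.wav")               # assumed local copy of one challenge record
    pcg = pcg.astype(np.float64)
    pcg /= np.max(np.abs(pcg))                        # normalise before segmentation/classification
    print(f"{fs} Hz, {pcg.size / fs:.1f} s")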

  1. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were of 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
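    For readers unfamiliar with the measure, the MMN is obtained as a deviant-minus-standard difference wave; a minimal Python sketch with placeholder epoch arrays:

    import numpy as np

    rng = np.random.default_rng(7)
    # Epochs: (n_trials, n_samples) from a fronto-central electrode, placeholder data.
    standards = rng.standard_normal((900, 300))
    deviants = rng.standard_normal((100, 300)) - 0.3   # placeholder negativity for deviants

    mmn = deviants.mean(axis=0) - standards.mean(axis=0)
    peak = mmn.min()                                   # MMN is a negative deflection
    print(f"MMN peak amplitude: {peak:.2f} (arbitrary units)")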

  2. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible, existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, and VHS…

  3. Why live recording sounds better: A case study of Schumann’s Träumerei

    Directory of Open Access Journals (Sweden)

    Haruka Shoda

    2015-01-01

    Full Text Available We explore the concept that artists perform best in front of an audience. The negative effects of performance anxiety are much better known than their related cousin on the other shoulder: the positive effects of social facilitation. The present study, however, reveals a listener's preference for performances recorded in front of an audience. In Study 1, we prepared two types of recordings of Träumerei performed by 13 pianists: recordings made in front of an audience and recordings made with no audience. According to the evaluations of 153 listeners, the recordings performed in front of an audience sounded better, suggesting that the presence of an audience enhanced or facilitated the performance. In Study 2, we analyzed the pianists' durational and dynamic expressions. According to the functional principal components analyses, the expression of Träumerei consisted of three components: the overall quantity, the cross-sectional contrast between the final and the remaining sections, and the control of the expressive variability. Pianists' expressions were targeted more to the average of the cross-sectional variation in the audience-present than in the audience-absent recordings. In Study 3, we explored a model that explained listeners' responses induced by pianists' acoustical expressions, using path analyses. The final model indicated that the cross-sectional variations of the duration and of the dynamics determined listeners' evaluations of the quality and of the emotionally moving experience, respectively. In line with humans' preference for commonality, the more average the durational expressions were in a live recording, the better the listeners' evaluations were, regardless of their musical experience. Only the well-experienced listeners (at least 16 years of musical training) were moved more by the deviated dynamic expressions in live recording, suggesting a link between the experienced listener's emotional experience and the unique dynamics in

  4. Sound as Affective Design Feature in Multimedia Learning--Benefits and Drawbacks from a Cognitive Load Theory Perspective

    Science.gov (United States)

    Königschulte, Anke

    2015-01-01

    The study presented in this paper investigates the potential effects of including non-speech audio such as sound effects into multimedia-based instruction taking into account Sweller's cognitive load theory (Sweller, 2005) and applied frameworks such as the cognitive theory of multimedia learning (Mayer, 2005) and the cognitive affective theory of…

  5. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    Science.gov (United States)

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) HRV computed from temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) against the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, took part in the present study. Following a standard protocol, a 5-min ECG was recorded after 10 min of supine rest, simultaneously by the portable simple analog amplifier with PC-based sound card and by the Biopac module, with surface electrodes in the Lead II position. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in the Kubios software. Short-term HRV indexes in both the time and frequency domains were used. The unpaired Student's t-test and the Pearson correlation coefficient were used for the analysis, in the R statistical software. No statistically significant differences were observed when comparing the HRV values analyzed by means of the two devices. Correlation analysis revealed a near-perfect positive correlation (r = 0.99, P < 0.001) between the time- and frequency-domain values obtained by the two devices. On the basis of these results, we suggest that the calculation of HRV values in the time and frequency domains from RR series obtained with the PC-based sound card is probably as reliable as that from the gold-standard Biopac MP36.
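
    As a toy illustration of the comparison described above, the following Python sketch computes two common short-term time-domain indexes and the Pearson correlation for a pair of RR series; the RR values are made up, and the study itself used Kubios and R rather than this code.

        import numpy as np

        def time_domain_hrv(rr_ms):
            """SDNN and RMSSD, two standard short-term time-domain indexes."""
            rr = np.asarray(rr_ms, dtype=float)
            sdnn = rr.std(ddof=1)                       # overall variability
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
            return sdnn, rmssd

        rr_soundcard = np.array([812, 795, 830, 841, 808, 799], dtype=float)
        rr_biopac = np.array([810, 797, 828, 843, 806, 801], dtype=float)

        print(time_domain_hrv(rr_soundcard), time_domain_hrv(rr_biopac))
        r = np.corrcoef(rr_soundcard, rr_biopac)[0, 1]  # device agreement
        print(f"r = {r:.3f}")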

  6. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  7. Theory-based Support for Mobile Language Learning: Noticing and Recording

    Directory of Open Access Journals (Sweden)

    Agnes Kukulska-Hulme

    2009-04-01

    Full Text Available This paper considers the issue of 'noticing' in second language acquisition, and argues for the potential of handheld devices to: (i) support language learners in noticing and recording noticed features 'on the spot', to help them develop their second language system; (ii) help language teachers better understand the specific difficulties of individuals or those from a particular language background; and (iii) facilitate data collection by applied linguistics researchers, which can be fed back into educational applications for language learning. We consider theoretical perspectives drawn from the second language acquisition literature, relating these to the practice of writing language learning diaries, and the potential for learner modelling to facilitate recording and prompting noticing in mobile-assisted language learning contexts. We then offer guidelines for developers of mobile language learning solutions to support the development of language awareness in learners.

  8. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1–59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case–control study

    Science.gov (United States)

    Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammitt, Laura L

    2017-01-01

    Introduction Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity are unknown, especially in developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Methods Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1–59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze, or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze), and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Results Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, on the presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls, and on the presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Control status, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. Conclusions The listening panel and case–control data suggest our methodology is feasible and likely valid
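
    For a binary finding rated by two listeners, PABAK depends only on the observed proportion of agreement: PABAK = 2*Po - 1. A minimal Python sketch, checked against the 74.9% agreement figure reported above:

        def pabak(n_agree: int, n_total: int) -> float:
            """Prevalence-adjusted, bias-adjusted kappa, two raters, two categories."""
            po = n_agree / n_total      # observed proportion of agreement
            return 2.0 * po - 1.0

        # Agreement on any abnormality in 74.9% of case recordings
        print(round(pabak(749, 1000), 2))   # -> 0.5, matching the reported PABAK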

  9. Chronic early postnatal scream sound stress induces learning deficits and NMDA receptor changes in the hippocampus of adult mice.

    Science.gov (United States)

    Hu, Lili; Han, Bo; Zhao, Xiaoge; Mi, Lihua; Song, Qiang; Wang, Jue; Song, Tusheng; Huang, Chen

    2016-04-13

    Chronic scream sounds during adulthood affect spatial learning and memory, both of which are sexually dimorphic. The long-term effects of chronic early postnatal scream sound stress (SSS) during postnatal days 1-21 (P1-P21) on spatial learning and memory in adult mice, and whether these effects are sexually dimorphic, are unknown. Therefore, the present study examined the performance of adult male and female mice in the Morris water maze following exposure to chronic early postnatal SSS. Hippocampal NR2A and NR2B levels as well as NR2A/NR2B subunit ratios were measured using immunohistochemistry. In the Morris water maze, stressed males showed greater impairment in spatial learning and memory than background males; by contrast, stressed and background females performed equally well. NR2B levels in CA1 and CA3 were upregulated, whereas NR2A/NR2B ratios were downregulated, in stressed males but not in females. These data suggest that chronic early postnatal SSS influences spatial learning and memory ability, levels of hippocampal NR2B, and NR2A/NR2B ratios in adult males. Moreover, chronic early stress-induced alterations exert long-lasting effects and appear to affect performance in a sex-specific manner.

  10. Chaotic dynamics of respiratory sounds

    International Nuclear Information System (INIS)

    Ahlstrom, C.; Johansson, A.; Hult, P.; Ask, P.

    2006-01-01

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data; the null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, concerning both LS and TS. The results motivate continued use of nonlinear tools for analysing RS data
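
    One ingredient of such analyses, the surrogate data test, is easy to sketch: Fourier phase-randomized surrogates preserve a signal's power spectrum while destroying any nonlinear structure, providing a null distribution against which a nonlinear statistic computed on the real data can be compared. The Python sketch below is our illustration, not the authors' code.

        import numpy as np

        def phase_randomized_surrogate(x, rng):
            """Surrogate with the same power spectrum but randomized phases."""
            X = np.fft.rfft(x)
            phases = rng.uniform(0, 2 * np.pi, size=X.size)
            phases[0] = 0.0               # keep the DC component real
            if x.size % 2 == 0:
                phases[-1] = 0.0          # keep the Nyquist component real
            return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

        rng = np.random.default_rng(0)
        signal = rng.standard_normal(1024)    # stand-in for a recorded lung sound
        surrogates = [phase_randomized_surrogate(signal, rng) for _ in range(19)]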

  12. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    scientists with that of numerical mathematicians studying sonification, psychologists, linguists, bioacousticians, and musicians to illuminate the structure of sound from different angles. Each of these disciplines deals with the use of sound to carry a different sort of information, under different requirements and constraints. By combining their insights, we can learn to understand the structure of sound in general.

  13. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
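
    A speculative Python sketch of this setup: an LSTM regresses, frame by frame, from the synthesized sound back to the gesture that produced it. The toy "synthesizer", network size and training details are our assumptions, not the authors' models.

        import torch
        import torch.nn as nn

        def toy_synth(gesture):                   # stand-in for a physics-based model
            return torch.tanh(torch.cumsum(gesture, dim=1))

        class InverseController(nn.Module):
            def __init__(self, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.out = nn.Linear(hidden, 1)
            def forward(self, sound):
                h, _ = self.lstm(sound)
                return self.out(h)                # predicted gesture per time step

        model = InverseController()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for step in range(200):
            gesture = torch.randn(8, 100, 1) * 0.1   # random training gestures
            sound = toy_synth(gesture)
            loss = nn.functional.mse_loss(model(sound), gesture)
            opt.zero_grad(); loss.backward(); opt.step()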

  14. Digitizing Sound: How Can Sound Waves be Turned into Ones and Zeros?

    Science.gov (United States)

    Vick, Matthew

    2010-10-01

    From MP3 players to cell phones to computer games, we're surrounded by a constant stream of ones and zeros. Do we really need to know how this technology works? While nobody can understand everything, digital technology is increasingly making our lives a collection of "black boxes" that we can use but have no idea how they work. Pursuing scientific literacy should propel us to open up a few of these metaphorical boxes. High school physics offers opportunities to connect the curriculum to sports, art, music, and electricity, but it also offers connections to computers and digital music. Learning activities about digitizing sounds offer wonderful opportunities for technology integration and student problem solving. I used this series of lessons in high school physics after teaching about waves and sound but before optics and total internal reflection so that the concepts could be further extended when learning about fiber optics.
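
    The core of such a lesson, sampling and quantization, fits in a few lines of Python; the sampling rate, bit depth and test tone below are arbitrary classroom choices:

        import numpy as np

        fs = 8000                                  # samples per second
        t = np.arange(0, 0.005, 1 / fs)            # 5 ms of signal
        wave = np.sin(2 * np.pi * 440 * t)         # model of the analog sound wave

        # 8-bit quantization: map [-1, 1] onto the integers 0..255
        codes = np.round((wave + 1) / 2 * 255).astype(np.uint8)
        for sample in codes[:5]:
            print(f"{sample:3d} -> {sample:08b}")  # the ones and zeros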

  15. Multi-sensory learning and learning to read.

    Science.gov (United States)

    Blomert, Leo; Froyen, Dries

    2010-09-01

    The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar

  16. Recognition and characterization of unstructured environmental sounds

    Science.gov (United States)

    Chu, Selina

    2011-12-01

    be used for realistic environmental sound. Natural unstructured environment sounds contain a large variety of sounds, which are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features, e.g. energy, zero-crossing, etc. Due to the lack of appropriate features suitable for environmental audio, and to achieve a more effective representation, I proposed a specialized feature extraction algorithm for environmental sounds that utilizes the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound, which we call MP-features. MP-features have been shown to capture and represent sounds from different sources and different ranges where frequency-domain features (e.g., MFCCs) fail, and can be advantageous when combined with MFCCs to improve overall performance. The third component leads to our investigation of modeling and detecting the background audio. One of the goals of this research is to characterize an environment. Since many events blend into the background, I wanted to look for a way to achieve a general model for any particular environment. Once we have an idea of the background, it will enable us to identify foreground events even if we haven't seen these events before. Therefore, the next step is to investigate learning the audio background model for each environment type, despite the occurrences of different foreground events. In this work, I presented a framework for robust audio background modeling, which includes learning the models for prediction, data knowledge and persistent characteristics of the environment. This approach has the ability to model the background and detect foreground events, as well as the ability to verify whether the predicted background is indeed the background or a foreground event that protracts for a longer period of time. In this work, I also investigated the use of a semi-supervised learning technique to
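
    The matching pursuit step behind MP-features can be sketched briefly: greedily pick the dictionary atom most correlated with the current residual, subtract its contribution, and repeat. In the Python sketch below, the small Gabor-like dictionary is a simplified stand-in for the one used in the dissertation:

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms):
            """dictionary: (n_total_atoms, n_samples) with unit-norm rows."""
            residual = signal.copy()
            picks = []
            for _ in range(n_atoms):
                scores = dictionary @ residual
                k = int(np.argmax(np.abs(scores)))
                picks.append((k, scores[k]))      # atom index and coefficient
                residual = residual - scores[k] * dictionary[k]
            return picks, residual

        n = 256
        t = np.arange(n)
        atoms = np.array([np.exp(-0.5 * ((t - c) / 16) ** 2)
                          * np.cos(2 * np.pi * f * t / n)
                          for c in range(0, n, 32) for f in (4, 8, 16)])
        atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
        sig = 2.0 * atoms[5] + 0.1 * np.random.default_rng(1).standard_normal(n)
        picks, residual = matching_pursuit(sig, atoms, n_atoms=3)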

  17. A terrified-sound stress induced proteomic changes in adult male rat hippocampus.

    Science.gov (United States)

    Yang, Juan; Hu, Lili; Wu, Qiuhua; Liu, Liying; Zhao, Lingyu; Zhao, Xiaoge; Song, Tusheng; Huang, Chen

    2014-04-10

    In this study, we investigated the biochemical mechanisms in the adult rat hippocampus underlying the relationship between a terrified-sound-induced psychological stress and spatial learning. Adult male rats were exposed to a terrified-sound stress, and the Morris water maze (MWM) was used to evaluate changes in spatial learning and memory. The protein expression profile of the hippocampus was examined using two-dimensional gel electrophoresis (2DE), matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, and bioinformatics analysis. The data from the MWM tests suggested that a terrified-sound stress improved spatial learning. The proteomic analysis revealed that the expression of 52 proteins was down-regulated, while that of 35 proteins was up-regulated, in the hippocampus of the stressed rats. We identified and validated six of the most significant differentially expressed proteins, those that demonstrated the greatest stress-induced changes. Our study provides the first evidence that a terrified-sound stress improves spatial learning in rats, and that the enhanced spatial learning coincides with changes in protein expression in the rat hippocampus. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    Science.gov (United States)

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we sought to determine whether S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers of identical sensitivity and frequency response. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of the internal S1, and internal gallop sounds were inconspicuous. Using Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from the external sounds. Internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of the amplitude expected from the external gallop sounds and the gain function derived from the comparison of internal and external S1 makes it very unlikely that external gallop sounds are derived from internal sounds.
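
    The transfer-function logic reads naturally as a short computation: estimate the chest-wall gain from the internal/external S1 pair in the frequency domain, then divide it out to predict the internal gallop sound from the external one. The Python sketch below uses synthetic stand-ins for all signals and a naive regularization:

        import numpy as np

        fs = 1000
        t = np.arange(0, 0.2, 1 / fs)
        internal_s1 = np.sin(2 * np.pi * 40 * t) * np.exp(-20 * t)   # toy signals
        external_s1 = 1.8 * np.sin(2 * np.pi * 40 * t + 0.3) * np.exp(-20 * t)
        external_s3 = 0.6 * np.sin(2 * np.pi * 25 * t) * np.exp(-15 * t)

        eps = 1e-9                                  # naive regularization
        gain = np.fft.rfft(external_s1) / (np.fft.rfft(internal_s1) + eps)
        predicted_internal_s3 = np.fft.irfft(
            np.fft.rfft(external_s3) / (gain + eps), n=external_s3.size)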

  19. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of STKIP PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds produced by Javanese students. Focusing on consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The results showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. In addition, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  1. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    Science.gov (United States)

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    Bowel dysfunction remains a major problem in neonates. Traditional auscultation of bowel sounds as a diagnostic aid in neonatal gastrointestinal complications is limited by clinician skill and by the inability to document and reassess. Consequently, we built a unique prototype to investigate the feasibility of an electronic monitoring system for continuous assessment of bowel sounds. We obtained Institutional Review Board approval for the investigational study to test our system. The system incorporated a prototype stethoscope head with a built-in microphone connected to a digital recorder. Recordings made over extended periods were evaluated for quality. We also considered the acoustic environment of the hospital where the stethoscope was used. The stethoscope head was attached to the abdomen with a hydrogel patch designed especially for this purpose. We used the system to obtain recordings from eight healthy, full-term babies. A scoring system was used to determine loudness, clarity, and ease of recognition in comparison with a traditional stethoscope. The recording duration was initially two hours and was increased to a maximum of eight hours. Median duration of attachment was three hours (3.75, 2.68). Based on the scoring, the bowel sound recording was perceived to be as loud and clear in sound reproduction as a traditional stethoscope. We determined that room noise and other noises were significant forms of interference in the recordings, which at times prevented analysis. However, no sound quality drift was noted in the recordings and no patient discomfort was observed. Minimal erythema was observed over the fixation site, which subsided within one hour. We demonstrated the long-term recording of infant bowel sounds. Our contributions included a prototype stethoscope head, which was affixed using a specially designed hydrogel adhesive patch. Such a recording can be reviewed and reassessed, which is new technology and an improvement over current practice. The use of this

  2. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  3. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information about the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming with a hydrophone array additionally produces azimuthal estimates of sound sources, and a two-dimensional array with acoustic focusing produces an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed
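
    The azimuthal estimates mentioned above come from beamforming. A minimal frequency-domain delay-and-sum sketch for a linear hydrophone array is below; the geometry, sound speed and data are our own illustrative assumptions:

        import numpy as np

        def delay_and_sum_azimuth(x, fs, spacing_m, c=1500.0,
                                  angles_deg=np.linspace(-90, 90, 181)):
            """x: (n_hydrophones, n_samples); returns beam power per angle."""
            n_ch, n_samp = x.shape
            freqs = np.fft.rfftfreq(n_samp, 1 / fs)
            X = np.fft.rfft(x, axis=1)
            power = []
            for theta in np.deg2rad(angles_deg):
                delays = np.arange(n_ch) * spacing_m * np.sin(theta) / c
                steer = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
                beam = np.fft.irfft((X * steer).sum(axis=0), n=n_samp)
                power.append(np.mean(beam ** 2))
            return np.asarray(power)

        rng = np.random.default_rng(0)
        fake = rng.standard_normal((4, 4096))       # 4 hydrophones of noise
        p = delay_and_sum_azimuth(fake, fs=32000, spacing_m=0.5)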

  4. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound-format file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component that contains the information signal can be clarified. In this study, we designed a wavelet transform based filter, using the Daubechies wavelet with four wavelet transform coefficients. Based on testing with ten types of breath sound data, the largest SNR obtained was 74.3685 dB, for bronchial sounds.
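
    A Python sketch of a db4 wavelet denoiser of this kind, using PyWavelets; the decomposition level and the soft universal threshold are our assumptions rather than the paper's exact choices:

        import numpy as np
        import pywt

        def wavelet_denoise(x, wavelet="db4", level=4):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
            thresh = sigma * np.sqrt(2 * np.log(len(x)))       # universal threshold
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        def snr_db(reference, denoised):
            noise = reference - denoised
            return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

        rng = np.random.default_rng(0)
        clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 2048))
        noisy = clean + 0.3 * rng.standard_normal(2048)
        print(f"SNR after denoising: {snr_db(clean, wavelet_denoise(noisy)):.1f} dB")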

  5. Improving auscultatory proficiency using computer simulated heart sounds

    Directory of Open Access Journals (Sweden)

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' improvement in auscultation skill. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from the pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students achieved a remarkable improvement in their auscultation skills. On the other hand, students stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  6. The strategic use of lecture recordings to facilitate an active and self-directed learning approach.

    Science.gov (United States)

    Topale, Luminica

    2016-08-12

    New learning technologies have the capacity to dramatically impact how students go about learning and to facilitate an active, self-directed learning approach. In U.S. medical education, students encounter a large volume of content, which must be mastered at an accelerated pace. The added pressure to excel on the USMLE Step 1 licensing exam and competition for residency placements require that students adopt an informed approach to the use of learning technologies, so as to enhance rather than detract from the learning process. The primary aim of this study was to gain a better understanding of how students were using recorded lectures in their learning and how their study habits had been influenced by the technology. Survey research was undertaken using a convenience sample. Students were asked to participate voluntarily in an electronic survey comprising 27 closed-ended, multiple-choice questions and one open-ended item. The survey was designed to explore students' perceptions of how recorded lectures affected their choices regarding class participation and impacted their learning, and to gain an understanding of how recorded lectures facilitated a strategic, active learning process. Findings revealed that recorded lectures had little influence on students' choices to participate, and that the perceived benefits of integrating recorded lectures into study practices were related to their facilitation of, and impact on, efficient, active, and self-directed learning. This study was a useful investigation into how the availability of lecture capture technology influenced medical students' study behaviors and how students were making valuable use of the technology as an active learning tool.

  7. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), the experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long-duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but recovered over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: the modulation was shorter-lived, and its effects varied between different acoustic stimuli, including between different male calls, suggesting that the modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  8. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  9. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  10. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  11. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals' frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves

  12. THE SOUND OF CINEMA: TECHNOLOGY AND CREATIVITY

    Directory of Open Access Journals (Sweden)

    Poznin Vitaly F.

    2017-12-01

    Full Text Available Technology is a means of creating any product. In screen art, however, it is also one of the elements that create the artistic space of a film. Considering the main stages in the development of cinematography, this article explores the influence of sound recording technology on the creation of a film's particular artistic and physical space: the beginning of the use of sound in movies; the mastering of the artistic means of an audiovisual work; the expansion of the spatial characteristics of screen sound; and the sound of modern cinema. Today, thanks to new technologies, the sound in cinema forms a specific quasi-realistic landscape, greatly enhancing the impact of the virtual screen images on the viewer.

  13. Detecting the temporal structure of sound sequences in newborn infants

    NARCIS (Netherlands)

    Háden, G.P.; Honing, H.; Török, M.; Winkler, I.

    2015-01-01

    Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as registering the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to onsets and offsets of tone trains as well as to changes in the

  14. Real, foley or synthetic? An evaluation of everyday walking sounds

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Grani, Francesco

    2013-01-01

    in using foley sounds for a film track. In particular, this work focuses on walking sounds: five different scenes of a walking person were video recorded, and each video was then mixed with the three different kinds of sounds mentioned above. Subjects were asked to recognise and describe the action performed

  15. Brain activation during anticipation of sound sequences.

    Science.gov (United States)

    Leaver, Amber M; Van Lare, Jennifer; Zielinski, Brandon; Halpern, Andrea R; Rauschecker, Josef P

    2009-02-25

    Music consists of sound sequences that require integration over time. As we become familiar with music, associations between notes, melodies, and entire symphonic movements become stronger and more complex. These associations can become so tight that, for example, hearing the end of one album track can elicit a robust image of the upcoming track while anticipating it in total silence. Here, we study this predictive "anticipatory imagery" at various stages throughout learning and investigate activity changes in corresponding neural structures using functional magnetic resonance imaging. Anticipatory imagery (in silence) for highly familiar naturalistic music was accompanied by pronounced activity in rostral prefrontal cortex (PFC) and premotor areas. Examining changes in the neural bases of anticipatory imagery during two stages of learning conditional associations between simple melodies, however, demonstrates the importance of fronto-striatal connections, consistent with a role of the basal ganglia in "training" frontal cortex (Pasupathy and Miller, 2005). Another striking change in neural resources during learning was a shift between caudal PFC earlier to rostral PFC later in learning. Our findings regarding musical anticipation and sound sequence learning are highly compatible with studies of motor sequence learning, suggesting common predictive mechanisms in both domains.

  16. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that, during learning, even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, whose local role in memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  17. Musical Sounds, Motor Resonance, and Detectable Agency

    Directory of Open Access Journals (Sweden)

    Jacques Launay

    2015-09-01

    Full Text Available This paper discusses the paradox that while human music making evolved and spread in an environment where it could only occur in groups, it is now often apparently an enjoyable asocial phenomenon. Here I argue that music is, by definition, sound that we believe has been in some way organized by a human agent, meaning that listening to any musical sounds can be a social experience. There are a number of distinct mechanisms by which we might associate musical sound with agency. While some of these mechanisms involve learning motor associations with that sound, it is also possible to have a more direct relationship from musical sound to agency, and the relative importance of these potentially independent mechanisms should be further explored. Overall, I conclude that the apparent paradox of solipsistic musical engagement is in fact unproblematic, because the way that we perceive and experience musical sounds is inherently social.

  18. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

    To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and patients with dysphagia affected by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with the penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects according to gender, age, and bolus consistency was compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. Mean duration of the swallowing sounds and post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml water was significantly different between patients with dysphagia and healthy patients. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace the use of more diagnostic and valuable measures.
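
    The reported sensitivity and specificity, with their confidence intervals, can be recomputed from a two-by-two table; the Python sketch below uses exact (Clopper-Pearson) intervals and hypothetical counts, so the numbers only roughly track those in the abstract:

        from scipy.stats import beta

        def proportion_ci(k, n, alpha=0.05):
            """Exact (Clopper-Pearson) confidence interval for k successes in n."""
            lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lo, hi

        tp, fn, tn, fp = 6, 3, 6, 0            # hypothetical 2x2 counts
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(sensitivity, proportion_ci(tp, tp + fn))
        print(specificity, proportion_ci(tn, tn + fp))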

  19. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  20. Learning a Health Knowledge Graph from Electronic Medical Records.

    Science.gov (United States)

    Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David

    2017-07-20

    Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
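
    The noisy-OR gate named above has a simple closed form: the probability of a symptom is one minus the product of the failure probabilities of its present parent diseases, times one minus a leak term for unmodeled causes. A minimal Python sketch with illustrative parameters:

        def noisy_or(present_disease_probs, leak=0.01):
            """P(symptom) given activation probabilities of the present diseases."""
            q = 1.0 - leak
            for p in present_disease_probs:
                q *= 1.0 - p
            return 1.0 - q

        # Two present diseases that cause the symptom with prob 0.3 and 0.5
        print(noisy_or([0.3, 0.5]))   # -> 0.6535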

  1. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2010-01-01

    We propose a methodology to design and evaluate environmental sounds for virtual environments. We suggest combining physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users’ actions, while soundscapes reproduce the characteristic soundmarks...... as well as self-induced interactive sounds simulated using physical models. Results show that subjects’ motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment....

  2. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...... response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were...... recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  3. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.
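
    The fluid-dynamics relation commonly used for such swinging-object models is the Aeolian tone shed by a cylinder moving through air. A minimal sketch, assuming the textbook Strouhal-number approximation St ≈ 0.2 (the model in the paper exposes more parameters than this):

        def aeolian_tone_hz(speed_mps, diameter_m, strouhal=0.2):
            # Fundamental vortex-shedding frequency of a cylinder:
            # f = St * v / d, with St ~ 0.2 over a wide Reynolds range.
            return strouhal * speed_mps / diameter_m

        # A 1 cm rod swung at 20 m/s whistles near 400 Hz
        print(aeolian_tone_hz(20.0, 0.01))  # 400.0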

  4. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recording is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  5. Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.

    Science.gov (United States)

    Rauterberg, M.; And Others

    Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…

  6. A single episode of high intensity sound inhibits long-term potentiation in the hippocampus of rats.

    Science.gov (United States)

    de Deus, J L; Cunha, A O S; Terzian, A L; Resstel, L B; Elias, L L K; Antunes-Rodrigues, J; Almeida, S S; Leão, R M

    2017-10-26

    Exposure to loud sounds has become increasingly common. The most common consequences of loud sound exposure are deafness and tinnitus, but emotional and cognitive problems are also associated with it. Loud sounds can activate the hypothalamic-pituitary-adrenal axis, resulting in the secretion of corticosterone, which affects hippocampal synaptic plasticity. Previously we have shown that long-term exposure to short episodes of high intensity sound inhibited hippocampal long-term potentiation (LTP) without affecting spatial learning and memory. Here we aimed to study the impact of short-term loud sound exposure on hippocampal synaptic plasticity and function. We found that a single minute of 110 dB sound inhibits hippocampal Schaffer-CA1 LTP for 24 hours. This effect did not occur with an 80 dB sound exposure, was not correlated with corticosterone secretion, and was also observed in the perforant-dentate gyrus synapse. We found that despite the deficit in LTP these animals presented normal spatial learning and memory and fear conditioning. We conclude that a single episode of high-intensity sound impairs hippocampal LTP without impairing memory and learning. Our results show that the hippocampus is very responsive to loud sounds, which can have a potential, but not yet identified, impact on its function.

  7. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot to serve the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.

  8. English Orthographic Learning in Chinese-L1 Young EFL Beginners.

    Science.gov (United States)

    Cheng, Yu-Lin

    2017-12-01

    English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) only visual memory was at their disposal, (2) visual memory and either some letter-sound knowledge or some semantic information was available, and (3) visual memory, some letter-sound knowledge, and some semantic information were all available. When only visual memory was available, orthographic learning (measured via an orthographic choice test) was meagre. Orthographic learning was significant when either semantic information or letter-sound knowledge supplemented visual memory, with letter-sound knowledge producing the stronger effect. Although the results suggest that letter-sound knowledge plays a more important role than semantic information, letter-sound knowledge alone does not suffice for perfect orthographic learning, as orthographic learning was greatest when letter-sound knowledge and semantic information were both available. The present findings are congruent with the view that the orthography of a foreign language drives its orthographic learning more than L1 orthographic learning experience does, thus extending Share's (Cognition 55:151-218, 1995) self-teaching hypothesis to include non-alphabetic-L1 children's orthographic learning of an alphabetic foreign language. The minimal letter-sound knowledge development observed in the Experiment 1 control group indicates that very little letter-sound knowledge develops in the absence of dedicated letter-sound training. Given the important role of letter-sound knowledge in English orthographic learning, dedicated letter-sound instruction is highly recommended.

  9. Records for learning

    DEFF Research Database (Denmark)

    Binder, Thomas

    2005-01-01

    The article presents and discusses findings from a participatory development of new learning practices among intensive care nurses, with an emphasis on the role of place making in informal learning activities.

  10. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long…

  11. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which can be stored in very little memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds into the music domain. The proposed technique was tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
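
    One simple way to realize such a shift toward the musical range is single-sideband modulation of the analytic signal. The sketch below is an illustrative stand-in, not necessarily the paper's exact method; the sample rate, test tone, and shift amount are assumptions:

        import numpy as np
        from scipy.signal import hilbert

        def frequency_shift(x, shift_hz, fs):
            # Shift every spectral component of x upward by shift_hz.
            # Using the analytic signal avoids negative-frequency fold-back.
            t = np.arange(len(x)) / fs
            return np.real(hilbert(x) * np.exp(2j * np.pi * shift_hz * t))

        fs = 4000
        t = np.arange(fs) / fs
        heart_like = np.sin(2 * np.pi * 50 * t)           # ~50 Hz component
        shifted = frequency_shift(heart_like, 200.0, fs)  # now near 250 Hz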

  12. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Abstract Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows a continuous heart sound stream to be recorded in a text format, which can be stored in very little memory compared with other audio formats. In addition, text-based data allow indexing and searching techniques to be applied to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over a long duration of time. This paper proposes a frequency shifting method to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds into the music domain. The proposed technique was tested with 100 samples recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  13. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
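
    The opponent-channel readout can be illustrated numerically: two broadly tuned channels prefer opposite hemifields, their activity difference varies monotonically with azimuth, and a simple linear decoder recovers source location. A synthetic sketch in which the tuning curves, noise level, and all numbers are assumptions rather than the study's fMRI data:

        import numpy as np

        rng = np.random.default_rng(0)
        azimuth = rng.uniform(-90, 90, 200)                    # degrees
        # Sigmoid tuning: left channel prefers the left hemifield,
        # right channel the right hemifield; small additive noise.
        left = 1 / (1 + np.exp(azimuth / 30)) + rng.normal(0, 0.05, 200)
        right = 1 / (1 + np.exp(-azimuth / 30)) + rng.normal(0, 0.05, 200)
        diff = right - left                                    # opponent signal
        coef = np.polyfit(diff, azimuth, 1)                    # linear decoder
        decoded = np.polyval(coef, diff)
        print(np.corrcoef(decoded, azimuth)[0, 1])             # ~0.99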

  14. Urban Noise Recorded by Stationary Monitoring Stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir

    2017-10-01

    The paper presents the analysis results for the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-sized town in Poland) at the roads in the town in the direction of Łódź and Lublin. The measurements were carried out by stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on the A-weighted sound level were recorded every 1 s in a buffer and the results were registered every 1 min over the period of investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00, and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-h, night, day, and evening periods. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with a normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that, compared with the Łódź road, the Lublin road data contained more cases for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation, and the quartile deviation were proposed for performing a comparative analysis of the scatter in the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of the uncertainties and the coefficients of variation of the equivalent sound levels.
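
    The equivalent sound level used throughout such analyses is the logarithmic mean of the short-term levels within an interval, Leq = 10*log10(mean(10^(Li/10))). A minimal sketch of this standard formula with made-up values:

        import numpy as np

        def leq_db(levels_db):
            # Equivalent continuous sound level from short-term dB(A) samples.
            levels_db = np.asarray(levels_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

        # One loud minute dominates an otherwise quiet hour
        quiet = np.full(59, 50.0)   # 59 one-minute samples at 50 dB(A)
        loud = np.array([90.0])     # 1 one-minute sample at 90 dB(A)
        print(round(leq_db(np.concatenate([quiet, loud])), 1))  # ~72.2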

  15. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.

    Science.gov (United States)

    Ching, Siok Siong; Tan, Yih Kai

    2012-09-07

    To determine the value of bowel sounds analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann(®) Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from each patient from the lower abdomen. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients received surgical intervention (35.2%) during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. No significant difference was seen

  16. A sound worth saving: acoustic characteristics of a massive fish spawning aggregation.

    Science.gov (United States)

    Erisman, Brad E; Rowell, Timothy J

    2017-12-01

    Group choruses of marine animals can produce extraordinarily loud sounds that markedly elevate levels of the ambient soundscape. We investigated sound production in the Gulf corvina ( Cynoscion othonopterus ), a soniferous marine fish with a unique reproductive behaviour threatened by overfishing, to compare with sounds produced by other marine animals. We coupled echosounder and hydrophone surveys to estimate the magnitude of the aggregation and sounds produced during spawning. We characterized individual calls and documented changes in the soundscape generated by the presence of as many as 1.5 million corvina within a spawning aggregation spanning distances up to 27 km. We show that calls by male corvina represent the loudest sounds recorded in a marine fish, and the spatio-temporal magnitude of their collective choruses are among the loudest animal sounds recorded in aquatic environments. While this wildlife spectacle is at great risk of disappearing due to overfishing, regional conservation efforts are focused on other endangered marine animals. © 2017 The Author(s).

  17. Copyright and Related Issues Relevant to Digital Preservation and Dissemination of Unpublished Pre-1972 Sound Recordings by Libraries and Archives. CLIR Publication No. 144

    Science.gov (United States)

    Besek, June M.

    2009-01-01

    This report addresses the question of what libraries and archives are legally empowered to do to preserve and make accessible for research their holdings of unpublished pre-1972 sound recordings. The report's author, June M. Besek, is executive director of the Kernochan Center for Law, Media and the Arts at Columbia Law School. Unpublished sound…

  18. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Full Text Available Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%. This rate slightly exceeds Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to identify the sound source correctly, participants with lower listening expertise, who resemble the majority of music consumers, achieved only 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.

  19. Developing a reference of normal lung sounds in healthy Peruvian children.

    Science.gov (United States)

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.

  20. The influence of meaning on the perception of speech sounds.

    Science.gov (United States)

    Kazanina, Nina; Phillips, Colin; Idsardi, William

    2006-07-25

    As part of knowledge of language, an adult speaker possesses information on which sounds are used in the language and on the distribution of these sounds in a multidimensional acoustic space. However, a speaker must know not only the sound categories of his language but also the functional significance of these categories, in particular, which sound contrasts are relevant for storing words in memory and which sound contrasts are not. Using magnetoencephalographic brain recordings with speakers of Russian and Korean, we demonstrate that a speaker's perceptual space, as reflected in early auditory brain responses, is shaped not only by bottom-up analysis of the distribution of sounds in his language but also by more abstract analysis of the functional significance of those sounds.

  1. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  2. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with a 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness for a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and then further evaluating the method on this new training set.
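
    A minimal sketch of the cycle-length step described above: the envelope's autocorrelation peaks at multiples of the cardiac period, so the period can be estimated without labelling S1/S2. The plausible heart-rate bounds and parameter names are assumptions:

        import numpy as np
        from scipy.signal import hilbert

        def cycle_length_s(pcg, fs, min_period_s=0.4, max_period_s=1.5):
            # Assumes the recording is longer than max_period_s.
            env = np.abs(hilbert(pcg))                 # amplitude envelope
            env = env - env.mean()
            ac = np.correlate(env, env, mode="full")[len(env) - 1:]
            lo, hi = int(min_period_s * fs), int(max_period_s * fs)
            return (lo + np.argmax(ac[lo:hi])) / fs    # seconds per cycle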

  3. US market. Sound below the line

    Energy Technology Data Exchange (ETDEWEB)

    Iken, Joern

    2012-07-01

    The American Wind Energy Association (AWEA) is publishing warnings almost daily. The lack of political support is endangering jobs. The year 2011 broke no records, but there was a solid plus in the expansion figures. (orig.)

  4. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm

    OpenAIRE

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and...

  5. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm, helps to improve the listening experience by incorporating the violinist motion effect in stereo.
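
    The paper's frame-weighted deconvolution is more elaborate, but the core idea of recovering an impulse response from an excitation/response pair can be sketched as regularized spectral division (the regularization constant eps is an assumption):

        import numpy as np

        def deconvolve_ir(excitation, response, eps=1e-3):
            # H = Y * conj(X) / (|X|^2 + eps): a regularized estimate of
            # the transfer function, inverted back to an impulse response.
            n = len(excitation) + len(response) - 1
            X = np.fft.rfft(excitation, n)
            Y = np.fft.rfft(response, n)
            H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
            return np.fft.irfft(H, n)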

  6. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbors and mutual-sharing-of-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which samples are the nearest neighbors of the testing sample and which consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with the competing classifiers k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN), and Mutual k-Nearest Neighbor (MKNN) on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features were extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN for all cases.
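
    As an illustration of the general pipeline (MFCC features feeding a nearest-neighbour classifier), here is a plain-KNN sketch; it is not the EKNN of the paper, and synthetic tones stand in for segmented frog calls:

        import numpy as np
        import librosa
        from sklearn.neighbors import KNeighborsClassifier

        def mfcc_vector(y, sr, n_mfcc=13):
            # Mean MFCC vector for one segmented call.
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

        sr = 22050
        t = np.arange(sr) / sr
        calls = [np.sin(2 * np.pi * f * t).astype(np.float32)
                 for f in (800, 820, 2400, 2450)]
        labels = ["species_a", "species_a", "species_b", "species_b"]

        X = np.vstack([mfcc_vector(c, sr) for c in calls])
        clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
        test = np.sin(2 * np.pi * 810 * t).astype(np.float32)
        print(clf.predict([mfcc_vector(test, sr)]))  # ['species_a']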

  7. Kindergarteners' performance in a sound-symbol paradigm predicts early reading.

    Science.gov (United States)

    Horbach, Josefine; Scharke, Wolfgang; Cröll, Jennifer; Heim, Stefan; Günther, Thomas

    2015-11-01

    The current study examined the role of serial processing of newly learned sound-symbol associations in early reading acquisition. A computer-based sound-symbol paradigm (SSP) was administered to 243 children during their last year of kindergarten (T1), and their reading performance was assessed 1 year later in first grade (T2). Results showed that performance on the SSP measured before formal reading instruction was associated with later reading development. At T1, early readers performed significantly better than nonreaders in learning correspondences between sounds and symbols as well as in applying those correspondences in a serial manner. At T2, SSP performance measured at T1 was positively associated with reading performance. Importantly, serial application of newly learned correspondences at T1 explained unique variance in first-grade reading performance in nonreaders over and above other verbal predictors, including phonological awareness, verbal short-term memory, and rapid automatized naming. Consequently, the SSP provides a promising way to study aspects of reading in preliterate children. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even at home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the usually used sound recording programs. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
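
    A sound card photogate ultimately reduces to timing threshold crossings in a recorded signal. A minimal offline sketch (the file name and threshold are assumptions; the dedicated software mentioned above is far more capable):

        import numpy as np
        from scipy.io import wavfile

        def gate_times(wav_path, threshold=0.5):
            # Times (s) at which the normalized gate signal rises above
            # the threshold; each rising edge marks the beam being blocked.
            fs, data = wavfile.read(wav_path)
            if data.ndim > 1:
                data = data[:, 0]                  # one channel is enough
            x = np.abs(data / np.max(np.abs(data)))
            above = (x > threshold).astype(int)
            edges = np.flatnonzero(np.diff(above) == 1)
            return edges / fs

        # e.g. a pendulum period is the spacing of successive blockings:
        # times = gate_times("gate.wav"); period = np.diff(times)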

  9. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    Science.gov (United States)

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  10. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    Science.gov (United States)

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we suspected that the selection of participants and the parameters of the sound stimuli in that study might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed significant differences between good perceivers (GPs) and poor perceivers (PPs) in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  11. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    Science.gov (United States)

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81 %) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to profiling of lung sounds. Results Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47 % were boys. We identified ten distinct spectral and spectro-temporal signal parameters and most demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions Lung sound extracted features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262

  12. Basic live sound reinforcement a practical guide for starting live audio

    CERN Document Server

    Biederman, Raven

    2013-01-01

    Access and interpret manufacturer spec information, find shortcuts for plotting measure and test equations, and learn how to begin your journey towards becoming a live sound professional. Land and perform your first live sound gigs with this guide that gives you just the right amount of information. Don't get bogged down in details intended for complex and expensive equipment and Madison Square Garden-sized venues. Basic Live Sound Reinforcement is a handbook for audio engineers and live sound enthusiasts performing in small venues from one-mike coffee shops to clubs. With their combined ye…

  13. Redesigning Space for Interdisciplinary Connections: The Puget Sound Science Center

    Science.gov (United States)

    DeMarais, Alyce; Narum, Jeanne L.; Wolfson, Adele J.

    2013-01-01

    Mindful design of learning spaces can provide an avenue for supporting student engagement in STEM subjects. Thoughtful planning and wide participation in the design process were key in shaping new and renovated spaces for the STEM community at the University of Puget Sound. The finished project incorporated Puget Sound's mission and goals as well…

  14. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared both Foley and real sounds originating from an identical action. The main purpose was to evaluate if sound effects...

  15. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    ...the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse...... generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously......

  16. New Applications of Learning Machines

    DEFF Research Database (Denmark)

    Larsen, Jan

    * Machine learning framework for sound search * Genre classification * Music separation * MIMO channel estimation and symbol detection

  17. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  18. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986

  19. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Directory of Open Access Journals (Sweden)

    Wendy Doubé

    2018-04-01

    Full Text Available Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  20. Misconceptions About Sound Among Engineering Students

    Science.gov (United States)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (the qualitative aspect) and, only secondarily, to gain a general idea of the extent to which they are held (the quantitative aspect). Our second objective was to explore other misconceptions about the wave aspects of sound. We also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main misconception found was the notion that sound propagates through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were also inconsistent in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly effective.

  1. 75 FR 16377 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-04-01

    ...). Petitions to Participate were received from: Intercollegiate Broadcast System, Inc./ Harvard Radio...), respectively, and the references to January 1, 2009, have been deleted. Next, for the reasons stated above in... State. (j) Retention of records. Books and records of a Broadcaster and of the Collective relating to...

  2. Composing Sound Identity in Taiko Drumming

    Science.gov (United States)

    Powell, Kimberly A.

    2012-01-01

    Although sociocultural theories emphasize the mutually constitutive nature of persons, activity, and environment, little attention has been paid to environmental features organized across sensory dimensions. I examine sound as a dimension of learning and practice, an organizing presence that connects the sonic with the social. This ethnographic…

  3. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
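
    Under the source-sparseness assumption each time-frequency bin is dominated by a single source, so a direction can be assigned per bin from the inter-channel phase difference. A sketch for one microphone pair (a simplification of the tetrahedral array; the spacing d, window length, and names are assumptions):

        import numpy as np
        from scipy.signal import stft

        def doa_per_bin(left, right, fs, d=0.05, c=343.0):
            # Map each bin's phase difference to an arrival angle.
            f, _, L = stft(left, fs, nperseg=1024)
            _, _, R = stft(right, fs, nperseg=1024)
            phase = np.angle(L * np.conj(R))              # radians per bin
            with np.errstate(divide="ignore", invalid="ignore"):
                delay = phase / (2 * np.pi * f[:, None])  # seconds (nan at DC)
                sin_theta = np.clip(delay * c / d, -1.0, 1.0)
                return np.degrees(np.arcsin(sin_theta))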

  4. 75 FR 14074 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-03-24

    ...). The additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound... Sec. 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in... payments to the Collective, late fees, statements of account, audit and verification of royalty payments...

  5. 12 CFR 1732.7 - Record hold.

    Science.gov (United States)

    2010-01-01

    ... Banking OFFICE OF FEDERAL HOUSING ENTERPRISE OVERSIGHT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SAFETY AND SOUNDNESS RECORD RETENTION Record Retention Program § 1732.7 Record hold. (a) Definition. For... Enterprise or OFHEO that the Enterprise is to retain records relating to a particular issue in connection...

  6. Wearable Eating Habit Sensing System Using Internal Body Sound

    Science.gov (United States)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both of these methods place a significant burden on the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis to the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.

  7. Learning Media Application Based On Microcontroller Chip Technology In Early Age

    Science.gov (United States)

    Ika Hidayati, Permata

    2018-04-01

    In early childhood, cognitive development needs the right learning media to help a child's cognitive intelligence grow quickly. The purpose of this study was to design a learning medium in the form of a doll that can be used to introduce human anatomy in early childhood. This educational doll uses the voice recognition technology of an EasyVR module to receive commands from the user and introduce the body parts of the doll, with an LED used as an indicator. In addition to introducing human anatomy, the doll lets the user play back sounds previously stored in the voice module's sound recorder. The results of this study show that the educational doll can detect voices and spoken commands presented in random order, and that it detects sound at distances of up to 2.5 meters.

  8. Variability of road traffic noise recorded by stationary monitoring stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek

    2017-11-01

    The paper presents the analysis results of the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-sized town in Poland) at the roads out of the town in the direction of Kraków and Warszawa. The measurements were carried out by stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on the A-weighted sound level were recorded every 1 s in a buffer and the results were registered every 1 min over the period of investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-h, night, day, and evening periods. The data analysed included recordings from 2013. The coefficient of variation and the positional coefficient of variation were proposed for performing a comparative analysis of the scatter in the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes. The differences concerned the values of the coefficients of variation of the equivalent sound levels.

  9. Classification of pulmonary pathology from breath sounds using the wavelet packet transform and an extreme learning machine.

    Science.gov (United States)

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S

    2017-06-08

    Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of the extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sound using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were inputted into the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
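
    A minimal sketch of the feature-extraction stage described above, computing per-node energy and entropy from a wavelet packet decomposition (the wavelet, depth, and function names are assumptions; the ELM classifier itself is not shown):

        import numpy as np
        import pywt

        def wp_energy_entropy(x, wavelet="db4", level=4):
            # Energy and log-entropy of each terminal node of the
            # wavelet packet tree for one breath-sound segment.
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
            energies, entropies = [], []
            for node in wp.get_level(level, order="freq"):
                c = node.data
                e = np.sum(c ** 2)
                p = c ** 2 / (e + 1e-12)       # normalized power per coeff
                energies.append(e)
                entropies.append(-np.sum(p * np.log(p + 1e-12)))
            return np.array(energies), np.array(entropies)

        energy, entropy = wp_energy_entropy(np.random.randn(4096))
        print(energy.shape, entropy.shape)     # (16,) (16,)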

  10. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in the swimbladder and anterior skeleton. The aims of this study were to compare the sound-producing morphology and the resulting sounds in juveniles, females, and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in the morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Conclusions Although it is not possible to distinguish male from female O. rochei externally, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits, which may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of…

  11. James Weldon Johnson and the Speech Lab Recordings

    Directory of Open Access Journals (Sweden)

    Chris Mustazza

    2016-03-01

    Full Text Available On December 24, 1935, James Weldon Johnson read thirteen of his poems at Columbia University, in a recording session engineered by Columbia Professor of Speech George W. Hibbitt and Barnard colleague Professor W. Cabell Greet, pioneers in the field that became sociolinguistics. Interested in American dialects, Greet and Hibbitt used early sound recording technologies to preserve dialect samples. In the same lab where they recorded T.S. Eliot, Gertrude Stein, and others, James Weldon Johnson read a selection of poems that included several from his seminal collection God’s Trombones and some dialect poems. Mustazza has digitized these and made them publicly available in the PennSound archive. In this essay, Mustazza contextualizes the collection, considering the recordings as sonic inscriptions alongside their textual manifestations. He argues that the collection must be heard within the frames of its production conditions—especially its recording in a speech lab—and that the sound recordings are essential elements in an hermeneutic analysis of the poems. He reasons that the poems’ original topics are reframed and refocused when historicized and contextualized within the frame of The Speech Lab Recordings.

  12. The Art and Science of Acoustic Recording: Re-enacting Arthur Nikisch and the Berlin Philharmonic Orchestra’s landmark 1913 recording of Beethoven’s Fifth Symphony

    Directory of Open Access Journals (Sweden)

    Dr Aleks Kolkowski

    2015-05-01

    Full Text Available The Art and Science of Acoustic Recording was a collaborative project between the Royal College of Music and the Science Museum that saw an historic orchestral recording from 1913 re-enacted by musicians, researchers and sound engineers at the Royal College of Music (RCM in 2014. The original recording was an early attempt to capture the sound of a large orchestra without re-scoring or substituting instruments and represents a step towards phonographic realism. Using replicated recording technology, media and techniques of the period, the re-enactment recorded two movements of Beethoven’s Fifth Symphony on to wax discs – the first orchestral acoustic recordings made since 1925. The aims were primarily to investigate the processes and practices of acoustic sound recording, developed largely through tacit knowledge, and to derive insights into the musicians’ experience of recording acoustically. Furthermore, the project sought to discover what the acoustic recordings of the past do – and don't – communicate to listeners today. Archival sources, historic apparatus and early photographic evidence served as groundwork for the re-enactment and guided its methodology, while the construction of replicas, wax manufacture and sound engineering were carried out by an expert in the field of acoustic recording. The wax recordings were digitised and some processed to produce disc copies playable on gramophone, thus replicating the entire course of recording, processing, duplication and reproduction. It is suggested that the project has contributed to a deeper understanding of early recordings and has provided a basis for further reconstructions of historical recording sessions.

  13. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is grounded in constructivist learning theory and incorporates educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments, performing a self-assessment on each and receiving a peer-assessment on the second recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors, and establishing rapport. Overall, the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for preclinical PT students in the development of clinical and communication skills.

  14. The influence of video recordings on beginning therapists’ learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents ...

  15. Towards parameter-free classification of sound effects in movies

    Science.gov (United States)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gunshots. In this work, we utilize low-level audio features, including MFCC and energy, to identify sound effects. It was shown in previous work that the hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires careful model design and parameter selection. In this work, we introduce a framework that avoids this necessity and works well with semi- and non-parametric learning algorithms.
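
    A minimal sketch of the low-level feature extraction named above (MFCCs plus short-time log energy), assuming the librosa library; the clip path and frame sizes are hypothetical.

        import numpy as np
        import librosa

        # Hypothetical clip; any mono audio file works here.
        y, sr = librosa.load("scene_audio.wav", sr=16000, mono=True)

        # 13 MFCCs per frame.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

        # Short-time log energy on a comparable frame grid.
        frames = librosa.util.frame(y, frame_length=2048, hop_length=512)
        energy = np.log(np.sum(frames ** 2, axis=0) + 1e-12)

        # One row per frame: MFCCs + energy (truncated to the shorter frame count).
        n = min(mfcc.shape[1], energy.shape[0])
        X = np.vstack([mfcc[:, :n], energy[None, :n]]).T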

  16. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Role of sound stimulation in reprogramming brain connectivity.

    Science.gov (United States)

    Chaudhury, Sraboni; Nag, Tapas C; Jain, Suman; Wadhwa, Shashi

    2013-09-01

    Sensory stimulation has a critical role to play in the development of an individual. Environmental factors tend to modify the inputs received by the sensory pathway. The developing brain is most vulnerable to these alterations and interacts with the environment to modify its neural circuitry. In addition to other sensory stimuli, auditory stimulation can also act as an external stimulus to provide enrichment during the perinatal period. Evidence suggests that an enriched environment in the form of auditory stimulation can play a substantial role in modulating plasticity during the prenatal period. This review focuses on the emerging role of prenatal auditory stimulation in the development of higher brain functions such as learning and memory in birds and mammals. The molecular mechanisms of various changes in the hippocampus following sound stimulation to effect neurogenesis, learning and memory are described. Sound stimulation can also modify neural connectivity in early postnatal life to enhance higher cognitive function or even repair the secondary damage in various neurological and psychiatric disorders. Thus, it becomes imperative to examine in detail the possible ameliorating effects of prenatal sound stimulation in existing animal models of various psychiatric disorders, such as autism.

  18. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    Full Text Available Terufumi Shimoda,1 Yasushi Obase,2 Yukio Nagasaka,3 Hiroshi Nakano,1 Akiko Ishimatsu,1 Reiko Kishikawa,1 Tomoaki Iwanaga1 1Clinical Research Center, Fukuoka National Hospital, Fukuoka, 2Second Department of Internal Medicine, School of Medicine, Nagasaki University, Nagasaki, 3Kyoto Respiratory Center, Otowa Hospital, Kyoto, Japan Purpose: Airway inflammation can be detected by lung sound analysis (LSA) at a single point in the posterior lower lung field. We performed LSA at 7 points to examine whether the technique could identify the location of airway inflammation in patients with asthma. Patients and methods: Breath sounds were recorded at 7 points on the body surface of 22 asthmatic subjects. Inspiration sound pressure level (ISPL), expiration sound pressure level (ESPL), and the expiration-to-inspiration sound pressure ratio (E/I) were calculated in 6 frequency bands. The data were analyzed for potential correlation with spirometry, airway hyperresponsiveness (PC20), and fractional exhaled nitric oxide (FeNO). Results: The E/I data in the frequency range of 100–400 Hz (E/I low frequency [LF] and E/I mid frequency [MF]) were better correlated with the spirometry, PC20, and FeNO values than were the ISPL or ESPL data. The left anterior chest and left posterior lower recording positions were associated with the best correlations (forced expiratory volume in 1 second/forced vital capacity: r=–0.55 and r=–0.58; logPC20: r=–0.46 and r=–0.45; and FeNO: r=0.42 and r=0.46, respectively). The majority of asthmatic subjects with FeNO ≥70 ppb exhibited high E/I MF levels in all lung fields (excluding the trachea) and V50%pred <80%, suggesting inflammation throughout the airway. Asthmatic subjects with FeNO <70 ppb showed high or low E/I MF levels depending on the recording position, indicating uneven airway inflammation. Conclusion: E/I LF and E/I MF are more useful LSA parameters for evaluating airway inflammation in bronchial asthma, and 7-point lung sound analysis can help localize the site of inflammation.
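
    A minimal sketch of the E/I computation described above, under stated assumptions: the breath sound is band-pass filtered (here 100-400 Hz, spanning the LF/MF range), and sound pressure level is computed over inspiration and expiration segments whose sample indices are assumed to be known, e.g. from airflow gating.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def band_spl_db(x, fs, lo, hi):
            """RMS level (dB re full scale) of x restricted to the [lo, hi] Hz band."""
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            y = sosfiltfilt(sos, x)
            return 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-12)

        def e_over_i_db(breath, fs, insp_idx, exp_idx, lo=100.0, hi=400.0):
            """Expiration-to-inspiration sound pressure difference in dB."""
            ispl = band_spl_db(breath[insp_idx], fs, lo, hi)   # inspiration SPL
            espl = band_spl_db(breath[exp_idx], fs, lo, hi)    # expiration SPL
            return espl - ispl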

  19. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  20. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Full Text Available Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and is often linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations create sounds similar to those created by the vocal cords during speech and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) in subjects undergoing simultaneous cardiac catheterization. From the collected heart sounds, the relative power of the frequency band, the energy of the sinusoid formants, and the entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAp) ≥ 25 mmHg versus subjects with an mPAp < 25 mmHg, with a sensitivity of 84% and a specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. The reduction in first-sinusoid-formant entropy of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
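
    The classification step named above (linear discriminant analysis with leave-one-out cross-validation) can be sketched with scikit-learn; the feature matrix here is random stand-in data, not the study's recordings.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # X: one row per subject (band power, formant energy, entropy);
        # y: 1 if mPAp >= 25 mmHg, else 0.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 3))
        y = rng.integers(0, 2, size=40)

        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy:", scores.mean())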

  1. The Development of Spelling-Sound Relationships in a Model of Phonological Reading.

    Science.gov (United States)

    Zorzi, Marco; Houghton, George; Butterworth, Brian

    1998-01-01

    Developmental aspects of spelling-to-sound mapping for English monosyllabic words are investigated with a simple two-layer network model using a simple, general learning rule. The model is trained on both regularly and irregularly spelled words but extracts regular spelling to sound relationships, which it can apply to new words. Training-related…
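
    A toy sketch in the spirit of the two-layer model described above: orthographic input units connect directly to phonological output units and the weights are trained with a simple, general delta rule. The encodings and the random 'lexicon' are invented stand-ins for the model's actual letter and phoneme scheme.

        import numpy as np

        rng = np.random.default_rng(0)
        n_orth, n_phon, lr = 26 * 4, 30, 0.1              # assumed layer sizes

        # Invented binary spelling and sound patterns for a tiny lexicon.
        X = rng.integers(0, 2, size=(50, n_orth)).astype(float)
        Y = rng.integers(0, 2, size=(50, n_phon)).astype(float)

        W = np.zeros((n_orth, n_phon))
        for epoch in range(200):
            out = 1.0 / (1.0 + np.exp(-(X @ W)))          # sigmoid output units
            W += lr * X.T @ (Y - out) / len(X)            # delta-rule update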

  2. System complexity and (im)possible sound changes

    NARCIS (Netherlands)

    Seinhorst, K.T.

    2016-01-01

    In the acquisition of phonological patterns, learners tend to considerably reduce the complexity of their input. This learning bias may also constrain the set of possible sound changes, which might be expected to contain only those changes that do not increase the complexity of the system. However,

  3. Combined Amplification and Sound Generation for Tinnitus: A Scoping Review.

    Science.gov (United States)

    Tutaj, Lindsey; Hoare, Derek J; Sereda, Magdalena

    In most cases, tinnitus is accompanied by some degree of hearing loss. Current tinnitus management guidelines recognize the importance of addressing hearing difficulties, with hearing aids being a common option. Sound therapy is the preferred mode of audiological tinnitus management in many countries, including the United Kingdom. Combination instruments provide a further option for those with an aidable hearing loss, as they combine amplification with a sound generation option. The aims of this scoping review were to catalog the existing body of evidence on combined amplification and sound generation for tinnitus and to consider opportunities for further research or evidence synthesis. A scoping review is a rigorous way to identify and review an established body of knowledge in the field for suggestive but not definitive findings and gaps in current knowledge. A wide variety of databases were used to ensure that all relevant records within the scope of this review were captured, including gray literature, conference proceedings, dissertations and theses, and peer-reviewed articles. Data were gathered using scoping review methodology and consisted of the following steps: (1) identifying potentially relevant records; (2) selecting relevant records; (3) extracting data; and (4) collating, summarizing, and reporting results. Searches using 20 different databases covered peer-reviewed and gray literature and returned 5959 records. After exclusion of duplicates and works that were out of scope, 89 records remained for further analysis. The records identified varied considerably in methodology, applied management programs, and types of devices. There were significant differences in practice between countries and clinics regarding candidature and fitting of combination aids, partly driven by the application of different management programs. Further studies on the use and effects of combined amplification and sound generation for tinnitus are needed.

  4. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    Science.gov (United States)

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. SOUND VELOCITY and Other Data from USS STUMP DD-978) (NCEI Accession 9400069)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The sound velocity data in this accession were collected from USS STUMP DD-978 by US Navy. The sound velocity in water is analog profiles data that was recorded in...

  6. Physics and music the science of musical sound

    CERN Document Server

    White, Harvey E

    2014-01-01

    Comprehensive and accessible, this foundational text surveys general principles of sound, musical scales, characteristics of instruments, mechanical and electronic recording devices, and many other topics. More than 300 illustrations plus questions, problems, and projects.

  7. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.
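
    An illustrative simplification of the two ingredients named above (not the paper's exact algorithm), assuming each clip is a 'bag' of frame-level feature vectors with only a clip-level label: a crude diverse-density-style search picks the positive-bag instance farthest from every negative instance as the target concept, and bag trimming keeps only the frames near that concept.

        import numpy as np

        def target_concept(pos_bags, neg_bags):
            """Pick the positive-bag instance farthest from all negative instances."""
            negs = np.vstack(neg_bags)
            best, best_d = None, -np.inf
            for bag in pos_bags:
                for inst in bag:
                    d = np.min(np.linalg.norm(negs - inst, axis=1))
                    if d > best_d:
                        best, best_d = inst, d
            return best

        def trim_bag(bag, concept, radius=1.0):
            """Keep only frames close to the learned target concept."""
            return bag[np.linalg.norm(bag - concept, axis=1) <= radius]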

  8. Lung function interpolation by means of neural-network-supported analysis of respiration sounds

    NARCIS (Netherlands)

    Oud, M

    Respiration sounds of individual asthmatic patients were analysed in the scope of the development of a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen provoked airways obstruction, during several stages

  9. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  10. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  11. Multidimensionality of Teachers' Graded Responses for Preschoolers' Stylistic Learning Behavior: The Learning-to-Learn Scales

    Science.gov (United States)

    McDermott, Paul A.; Fantuzzo, John W.; Warley, Heather P.; Waterman, Clare; Angelo, Lauren E.; Gadsden, Vivian L.; Sekino, Yumiko

    2011-01-01

    Assessment of preschool learning behavior has become very popular as a mechanism to inform cognitive development and promote successful interventions. The most widely used measures offer sound predictions but distinguish only a few types of stylistic learning and lack sensitive growth detection. The Learning-to-Learn Scales was designed to…

  12. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in step or out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.

  13. Modern recording techniques

    CERN Document Server

    Huber, David Miles

    2013-01-01

    As the most popular and authoritative guide to recording, Modern Recording Techniques provides everything you need to master the tools and day-to-day practice of music recording and production. From room acoustics and running a session to mic placement and designing a studio, Modern Recording Techniques will give you a thorough grounding in the theory and industry practice. Expanded to include the latest digital audio technology, the 7th edition now includes sections on podcasting, new surround sound formats, and HD audio. If you are just starting out or looking for a step up ...

  14. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and sonifications.

  15. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.
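
    One common way such equalization filters can be derived is regularized spectral inversion of a measured loudspeaker/room impulse response; the regularization keeps deep spectral notches from being boosted excessively. This is a generic sketch, not necessarily the EER's actual procedure.

        import numpy as np

        def inverse_eq_fir(impulse_response, n_taps=512, eps=1e-3):
            """FIR filter approximating the inverse of a measured response."""
            H = np.fft.rfft(impulse_response, n=n_taps)
            H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized 1/H
            h_inv = np.fft.irfft(H_inv, n=n_taps)
            return np.roll(h_inv, n_taps // 2)            # shift toward causal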

  16. Sound transmission reduction with intelligent panel systems

    Science.gov (United States)

    Fuller, Chris R.; Clark, Robert L.

    1992-01-01

    Experimental and theoretical investigations are performed of the use of intelligent panel systems to control the sound transmission and radiation. An intelligent structure is defined as a structural system with integrated actuators and sensors under the guidance of an adaptive, learning type controller. The system configuration is based on the Active Structural Acoustic Control (ASAC) concept where control inputs are applied directly to the structure to minimize an error quantity related to the radiated sound field. In this case multiple piezoelectric elements are employed as sensors. The importance of optimal shape and location is demonstrated to be of the same order of influence as increasing the number of channels of control.

  17. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782

  18. Science Education Using a Computer Model-Virtual Puget Sound

    Science.gov (United States)

    Fruland, R.; Winn, W.; Oppenheimer, P.; Stahr, F.; Sarason, C.

    2002-12-01

    We created an interactive learning environment based on an oceanographic computer model of Puget Sound-Virtual Puget Sound (VPS)-as an alternative to traditional teaching methods. Students immersed in this navigable 3-D virtual environment observed tidal movements and salinity changes, and performed tracer and buoyancy experiments. Scientific concepts were embedded in a goal-based scenario to locate a new sewage outfall in Puget Sound. Traditional science teaching methods focus on distilled representations of agreed-upon knowledge removed from real-world context and scientific debate. Our strategy leverages students' natural interest in their environment, provides meaningful context and engages students in scientific debate and knowledge creation. Results show that VPS provides a powerful learning environment, but highlights the need for research on how to most effectively represent concepts and organize interactions to support scientific inquiry and understanding. Research is also needed to ensure that new technologies and visualizations do not foster misconceptions, including the impression that the model represents reality rather than being a useful tool. In this presentation we review results from prior work with VPS and outline new work for a modeling partnership recently formed with funding from the National Ocean Partnership Program (NOPP).

  19. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents ...

  20. Role of sound stimulation in reprogramming brain connectivity

    Indian Academy of Sciences (India)

    2013-07-17

    Jul 17, 2013 ... higher brain functions such as learning and memory in birds and mammals. ... Sound at an optimum level for a short period may act as an auditory stimulus to ... This could lead to long-term plasticity, which allows fine tuning to ...

  1. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie largely below 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy over a single neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds; various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
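
    The LVQ stage of the 2-stage classifier can be sketched as follows, assuming 17-dimensional feature vectors and integer labels for the six PS classes; this minimal LVQ1 with one prototype per class is an illustrative reduction, not the paper's exact network.

        import numpy as np

        class LVQ1:
            """Minimal LVQ1: winner prototype moves toward (or away from) each sample."""
            def __init__(self, lr=0.05, epochs=100):
                self.lr, self.epochs = lr, epochs

            def fit(self, X, y):
                self.labels = np.unique(y)
                # Initialize each prototype at its class mean.
                self.protos = np.array([X[y == c].mean(axis=0) for c in self.labels])
                for _ in range(self.epochs):
                    for xi, yi in zip(X, y):
                        w = np.argmin(np.linalg.norm(self.protos - xi, axis=1))
                        sign = 1.0 if self.labels[w] == yi else -1.0
                        self.protos[w] += sign * self.lr * (xi - self.protos[w])
                return self

            def predict(self, X):
                d = np.linalg.norm(X[:, None, :] - self.protos[None], axis=2)
                return self.labels[np.argmin(d, axis=1)]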

  2. 75 FR 67777 - Copyright Office; Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Science.gov (United States)

    2010-11-03

    ... (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., spoken, or other sounds, but not including the sounds accompanying a motion picture or other audiovisual... general, Federal law is better defined, both as to the rights and the exceptions, and more consistent than...

  3. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  4. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from those of non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of clinic noise, more training samples, and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.

  5. Comparison between uroflowmetry and sonouroflowmetry in recording of urinary flow in healthy men.

    Science.gov (United States)

    Krhut, Jan; Gärtner, Marcel; Sýkora, Radek; Hurtík, Petr; Burda, Michal; Luňáček, Libor; Zvarová, Katarína; Zvara, Peter

    2015-08-01

    To evaluate the accuracy of sonouroflowmetry in recording urinary flow parameters and voided volume. A total of 25 healthy male volunteers (age 18-63 years) were included in the study. All participants were asked to carry out uroflowmetry synchronous with recording of the sound generated by the urine stream hitting the water level in the urine collection receptacle, using a dedicated cell phone. From 188 recordings, 34 were excluded because of insufficient voided volume. Pearson's correlation coefficient was used to compare parameters recorded by uroflowmetry with those calculated based on sonouroflowmetry recordings. The flow pattern recorded by sonouroflowmetry showed a good correlation with the uroflowmetry trace. A strong correlation (Pearson's correlation coefficient 0.87) was documented between uroflowmetry-recorded flow time and duration of the sound signal recorded with sonouroflowmetry. A moderate correlation was observed in voided volume (Pearson's correlation coefficient 0.68) and average flow rate (Pearson's correlation coefficient 0.57). A weak correlation (Pearson's correlation coefficient 0.38) between maximum flow rate recorded using uroflowmetry and sonouroflowmetry-recorded peak sound intensity was documented. The present study shows that the basic concept of utilizing sound analysis for estimation of urinary flow parameters and voided volume is valid. However, further development of this technology and standardization of the recording algorithm are required. © 2015 The Japanese Urological Association.
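
    The reported agreement measures can be reproduced with a short script, assuming paired per-void measurements; the numbers below are invented stand-ins, not study data.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical paired measurements: uroflowmetry flow time (s) versus
        # duration of the recorded stream sound (s) for the same voids.
        flow_time = np.array([23.1, 18.4, 35.0, 27.9, 21.2])
        sound_dur = np.array([22.0, 19.1, 33.2, 29.0, 20.5])

        r, p = pearsonr(flow_time, sound_dur)
        print(f"Pearson r = {r:.2f} (p = {p:.3f})")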

  6. Xinyinqin: a computer-based heart sound simulator.

    Science.gov (United States)

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  7. ERPs recorded during early second language exposure predict syntactic learning.

    Science.gov (United States)

    Batterink, Laura; Neville, Helen J

    2014-09-01

    Millions of adults worldwide are faced with the task of learning a second language (L2). Understanding the neural mechanisms that support this learning process is an important area of scientific inquiry. However, most previous studies on the neural mechanisms underlying L2 acquisition have focused on characterizing the results of learning, relying upon end-state outcome measures in which learning is assessed after it has occurred, rather than on the learning process itself. In this study, we adopted a novel and more direct approach to investigate neural mechanisms engaged during L2 learning, in which we recorded ERPs from beginning adult learners as they were exposed to an unfamiliar L2 for the first time. Learners' proficiency in the L2 was then assessed behaviorally using a grammaticality judgment task, and ERP data acquired during initial L2 exposure were sorted as a function of performance on this task. High-proficiency learners showed a larger N100 effect to open-class content words compared with closed-class function words, whereas low-proficiency learners did not show a significant N100 difference between open- and closed-class words. In contrast, amplitude of the N400 word category effect correlated with learners' L2 comprehension, rather than predicting syntactic learning. Taken together, these results indicate that learners who spontaneously direct greater attention to open- rather than closed-class words when processing L2 input show better syntactic learning, suggesting a link between selective attention to open-class content words and acquisition of basic morphosyntactic rules. These findings highlight the importance of selective attention mechanisms for L2 acquisition.

  8. Pattern-Induced Covert Category Learning in Songbirds.

    Science.gov (United States)

    Comins, Jordan A; Gentner, Timothy Q

    2015-07-20

    Language is uniquely human, but its acquisition may involve cognitive capacities shared with other species. During development, language experience alters speech sound (phoneme) categorization. Newborn infants distinguish the phonemes in all languages but by 10 months show adult-like greater sensitivity to native language phonemic contrasts than non-native contrasts. Distributional theories account for phonetic learning by positing that infants infer category boundaries from modal distributions of speech sounds along acoustic continua. For example, tokens of the sounds /b/ and /p/ cluster around different mean voice onset times. To disambiguate overlapping distributions, contextual theories propose that phonetic category learning is informed by higher-level patterns (e.g., words) in which phonemes normally occur. For example, the vowel sounds /Ι/ and /e/ can occupy similar perceptual spaces but can be distinguished in the context of "with" and "well." Both distributional and contextual cues appear to function in speech acquisition. Non-human species also benefit from distributional cues for category learning, but whether category learning benefits from contextual information in non-human animals is unknown. The use of higher-level patterns to guide lower-level category learning may reflect uniquely human capacities tied to language acquisition or more general learning abilities reflecting shared neurobiological mechanisms. Using songbirds, European starlings, we show that higher-level pattern learning covertly enhances categorization of the natural communication sounds. This observation mirrors the support for contextual theories of phonemic category learning in humans and demonstrates a general form of learning not unique to humans or language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Records Management And Private Sector Organizations | Mnjama ...

    African Journals Online (AJOL)

    This article begins by examining the role of records management in private organizations. It identifies the major reason why organizations ought to manage their records effectively and efficiently. Its major emphasis is that a sound records management programme is a pre-requisite to quality management system programme ...

  10. Sound algorithms

    OpenAIRE

    De Götzen, Amalia; Mion, Luca; Tache, Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  11. Extracting meaning from audio signals - a machine learning approach

    DEFF Research Database (Denmark)

    Larsen, Jan

    2007-01-01

    * Machine learning framework for sound search * Genre classification * Music and audio separation * Wind noise suppression

  12. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    Directory of Open Access Journals (Sweden)

    Gorka Fraga González

    2017-01-01

    Full Text Available We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for establishing and consolidating L-SS associations. Then we review brain potential studies, including our own, that yielded two markers associated with reading fluency. Here we show that the marker related to visual specialization (N170) predicts word and pseudoword reading fluency in children who received additional practice in the processing of morphological word structure. Conversely, L-SS integration (indexed by mismatch negativity, MMN) may only remain important when direct orthography-to-semantics conversion is not possible, such as in pseudoword reading. In addition, the correlation between these two markers supports the notion that multisensory integration facilitates visual specialization. Finally, we review the role of implicit learning and executive functions in audiovisual learning in dyslexia. Implications for remedial research are discussed and suggestions for future studies are presented.

  13. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    46 CFR § 7.20 (Shipping, 2010-10-01), Atlantic Coast: Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY.

  14. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality based on the timbre attribute of impacted wooden bars and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process and a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being once again classified. The processing is based on a physical model ensuring the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of the pitch in the xylophone maker judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues in the choice of attractive wood species from a musical point of view.
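
    The two timbre descriptors singled out above can be estimated from a recorded bar strike roughly as follows, assuming NumPy/SciPy: spectral bandwidth as the second spectral moment around the centroid, and frequency-dependent damping as the decay rate of a band-passed partial's envelope. The filter band and fitting details are illustrative.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def spectral_bandwidth(x, fs):
            """Second spectral moment around the centroid, in Hz."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            f = np.fft.rfftfreq(len(x), 1 / fs)
            centroid = np.sum(f * spec) / np.sum(spec)
            return np.sqrt(np.sum((f - centroid) ** 2 * spec) / np.sum(spec))

        def partial_damping(x, fs, f0, half_bw=50.0):
            """Decay rate (1/s) of the partial near f0, from its log envelope."""
            sos = butter(4, [f0 - half_bw, f0 + half_bw], btype="bandpass",
                         fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(sos, x)))
            t = np.arange(len(env)) / fs
            slope, _ = np.polyfit(t, np.log(env + 1e-12), 1)
            return -slope                                 # larger = faster damping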

  15. Heart sounds analysis using probability assessment

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Viščor, Ivo; Halámek, Josef; Jurčo, Juraj; Jurák, Pavel

    2017-01-01

    Vol. 38, No. 8 (2017), pp. 1685-1700. ISSN 0967-3334. R&D Projects: GA ČR GAP102/12/2034; GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01. Institutional support: RVO:68081731. Keywords: heart sounds * FFT * machine learning * signal averaging * probability assessment. Subject RIV: FS - Medical Facilities; Equipment. OECD field: Medical engineering. Impact factor: 2.058, year: 2016

  16. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
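
    Path length entropy is the authors' new method and is not specified in this abstract, so the sketch below implements sample entropy, one of the baseline measures it is compared against; m = 2 and r = 0.2 standard deviations are conventional defaults.

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            """SampEn: -log of the conditional probability that templates matching
            at length m still match at length m + 1 (Chebyshev distance <= tol)."""
            x = np.asarray(x, dtype=float)
            tol = r * np.std(x)

            def matches(mm):
                t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(t[:, None] - t[None]), axis=2)
                return np.sum(d <= tol) - len(t)          # exclude self-matches

            return -np.log(matches(m + 1) / matches(m))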

  17. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

    Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  18. Integration of strategy experiential learning in e-module of electronic records management

    Directory of Open Access Journals (Sweden)

    S. Sutirman

    2018-01-01

    Full Text Available This study aims to determine the effectiveness of an e-module of electronic records management integrated with experiential learning strategies in improving student achievement in the cognitive, psychomotor, and affective domains. This is a research and development study. The research and development model used is Web-Based Instructional Design (WBID), developed by Davidson-Shivers and Rasmussen. The research and development steps consisted of analysis, evaluation planning, concurrent design, implementation, and summative evaluation. The study combined qualitative and quantitative approaches. Data were collected through the Delphi technique, observation, documentation studies, and tests, and analyzed both qualitatively and quantitatively. The effectiveness of the product was tested using a quasi-experimental pretest-posttest non-equivalent control group design. The results showed that the e-module of electronic records management integrated with experiential learning strategies can improve student achievement in the cognitive, psychomotor, and affective domains.

  19. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    Science.gov (United States)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion and the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurred in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation. Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did

  20. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    Directory of Open Access Journals (Sweden)

    Nordahl Rolf

    2010-01-01

    Full Text Available We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users' actions, while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models are used to simulate the act of walking in the botanical garden of the city of Prague, while soundscapes are used to reproduce the particular sound of the garden. The auditory feedback designed was combined with a photorealistic reproduction of the same garden. A between-subjects experiment with 126 participants was conducted, involving six different experimental conditions, including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show that subjects' motion in the environment is significantly enhanced when dynamic sound sources and the sound of egomotion are rendered in the environment.

  1. Effects of providing word sounds during printed word learning

    NARCIS (Netherlands)

    Reitsma, P.; Dongen, van A.J.N.; Custers, E.

    1984-01-01

    The purpose of this study was to explore the effects of the availability of the spoken sound of words along with their printed forms during reading practice. First-grade children from two normal elementary schools practised reading several unfamiliar words in print. For half of the printed words the

  2. Computerised Analysis of Telemonitored Respiratory Sounds for Predicting Acute Exacerbations of COPD.

    Science.gov (United States)

    Fernandez-Granero, Miguel Angel; Sanchez-Morillo, Daniel; Leon-Jimenez, Antonio

    2015-10-23

    Chronic obstructive pulmonary disease (COPD) is one of the commonest causes of death in the world and poses a substantial burden on healthcare systems and patients' quality of life. The largest component of the related healthcare costs is attributable to admissions due to acute exacerbations (AECOPD). The evidence that might support the effectiveness of telemonitoring interventions in COPD is limited, partially due to the lack of useful predictors for the early detection of AECOPD. Electronic stethoscopes and computerised analysis of respiratory sounds (CARS) techniques provide an opportunity for substantial improvement in the management of respiratory diseases. This exploratory study aimed to evaluate the feasibility of using: (a) a respiratory sensor embedded in a self-tailored housing for ageing users; (b) a telehealth framework; (c) CARS; and (d) machine learning techniques for the remote early detection of AECOPD. In a 6-month pilot study, 16 patients with COPD were equipped with a home base station and a sensor to record their respiratory sounds daily. Principal component analysis (PCA) and a support vector machine (SVM) classifier were used to predict AECOPD. 75.8% of exacerbations were detected early, on average 5 ± 1.9 days before medical attention was sought. The proposed method could provide support to patients, physicians and healthcare systems.
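
    The feature-extraction details live in the paper itself; the PCA-plus-SVM classification stage it names can be sketched with scikit-learn as follows. The feature matrix, labels, and parameters below are hypothetical placeholders, not the study's data or settings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical data: one row of respiratory-sound features per recording
        # day, labelled 1 if an exacerbation followed within the prediction window.
        rng = np.random.default_rng(42)
        X = rng.standard_normal((200, 30))       # 200 days x 30 acoustic features
        y = (rng.random(200) < 0.2).astype(int)  # sparse exacerbation labels

        clf = make_pipeline(
            StandardScaler(),                    # put features on a common scale
            PCA(n_components=10),                # compress correlated features
            SVC(kernel="rbf", class_weight="balanced"),  # handle class imbalance
        )
        print(cross_val_score(clf, X, y, cv=5, scoring="recall").mean())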

  3. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between the computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at six chest locations (right and left anterior, lateral, and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected from the airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented a significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75), and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between the computerised respiratory sounds of smokers and non-smokers were found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds

  4. Are binaural recordings needed for subjective and objective annoyance assessment of traffic noise?

    DEFF Research Database (Denmark)

    Rodríguez, Estefanía Cortés; Song, Wookeun; Brunskog, Jonas

    2011-01-01

    Humans are annoyed when they are exposed to environmental noise. Traditional measures such as sound pressure levels may not correlate well with how humans perceive annoyance; it is therefore important to investigate psychoacoustic metrics that may correlate better with the perceived annoyance of environmental noise than the A-weighted equivalent sound pressure level. This study examined whether the use of binaural recordings of sound events improves the correlation between objective metrics and perceived annoyance, particularly for road traffic noise. Metrics based on measurement with a single microphone and on binaural sound field recordings have been examined and compared. In order to acquire data on the subjective perception of annoyance, a series of listening tests was carried out. It is concluded that binaural loudness metrics from binaural recordings are better correlated...

  5. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    Science.gov (United States)

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R² ≥ 0.94). Of the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of the models. Performance of the index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy, indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change. This article is protected by copyright. All rights reserved.
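
    The modelling step described above can be sketched with scikit-learn's random forest; the index table, diversity targets, and scores below are synthetic placeholders, not the study's 43-site dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_predict

        # Hypothetical table: one row per recording; columns are acoustic indices
        # (roughness, acoustic activity, acoustic richness, ...); the target is
        # the Shannon diversity of annotated biological sounds in that recording.
        rng = np.random.default_rng(1)
        indices = rng.random((120, 8))              # 120 recordings x 8 indices
        shannon = indices[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(120)

        model = RandomForestRegressor(n_estimators=500, random_state=0)
        pred = cross_val_predict(model, indices, shannon, cv=5)
        print(np.corrcoef(pred, shannon)[0, 1] ** 2)  # out-of-fold R^2 proxy

        model.fit(indices, shannon)
        print(model.feature_importances_)  # which indices drive the prediction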

  6. Understanding and crafting the mix the art of recording

    CERN Document Server

    Moylan, William

    2014-01-01

    Understanding and Crafting the Mix, 3rd edition provides the framework to identify, evaluate, and shape your recordings with clear and systematic methods. Featuring numerous exercises, this third edition allows you to develop critical listening and analytical skills to gain greater control over the quality of your recordings. Sample production sequences and descriptions of the recording engineer's role as composer, conductor, and performer provide you with a clear view of the entire recording process. Dr. William Moylan takes an inside look into a range of iconic popular music, thus offering insights into making meaningful sound judgments during recording. His unique focus on the aesthetic of recording and mixing will allow you to immediately and artfully apply his expertise while at the mixing desk. A companion website features recorded tracks to use in exercises, reference materials, additional examples of mixes and sound qualities, and mixed tracks.

  7. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

    This study presents the design, development, and implementation of a simple low-cost method of phonocardiography signal detection. Human heart and lung signals are detected using a simple microphone connected to a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiographic pathological cases. Methods for the automatic classification of normal and abnormal heart sounds, murmurs, and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed, and the measurements can be saved for further analysis. The method in this study can be used by doctors as a diagnostic aid and may be useful for teaching purposes at medical and nursing schools.
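
    The amplitude and frequency analyses mentioned can be illustrated outside LabView with a few lines of Python; the synthetic clip below stands in for a microphone recording and is not from the study.

        import numpy as np

        def dominant_frequency(signal, fs):
            """Return the strongest spectral component of a heart/lung sound clip."""
            window = np.hanning(len(signal))
            spectrum = np.abs(np.fft.rfft(signal * window))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            return freqs[np.argmax(spectrum)]

        # Synthetic stand-in for a recorded clip: an S1-like thump near 50 Hz.
        fs = 4000
        t = np.arange(0, 1.0, 1.0 / fs)
        clip = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
        print(dominant_frequency(clip, fs))  # ~50 Hz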

  8. The insider's guide to home recording record music and get paid

    CERN Document Server

    Tarquin, Brian

    2015-01-01

    The last decade has seen an explosion in the number of home-recording studios. With the mass availability of sophisticated technology, there has never been a better time to do it yourself and make a profit. Take a studio journey with Brian Tarquin, the multiple-Emmy-award-winning recording artist and producer, as he leads you through the complete recording process, and shows you how to perfect your sound using home equipment. He guides you through the steps to increase your creative freedom, and offers numerous tips to improve the effectiveness of your workflow. Topics covered in this book incl

  9. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

    The conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous stethoscopic sounds at that moment and cannot recall the state of the patient's stethoscopic sounds at the next visit, which prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound were developed, auscultation would be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract the heart sounds of a fetus as well as an adult and allows us to hear and record the sounds. Using the developed stethoscopic amplifier, clean heart sounds of fetuses and adults can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Surprisingly, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. It can be expected that the developed electronic stethoscope can substitute for conventional stethoscopes, and that if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.

  10. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

    …psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to what extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener's head: one straight ahead, and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds originated either from the central speaker or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings...

  11. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system, and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear, and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on the source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing for extracting spatial information about sources independently of the source signal.
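
    The spiking model itself is beyond a short sketch, but the binaural timing cue it builds on can be illustrated with a classical cross-correlation estimate of the interaural time difference (ITD); the signals and delay below are synthetic, not the paper's HRTF stimuli.

        import numpy as np

        def estimate_itd(left, right, fs):
            """Estimate the interaural time difference by cross-correlation.

            Returns a positive value when the sound reaches the left ear first.
            """
            corr = np.correlate(right, left, mode="full")
            lag = np.argmax(corr) - (len(left) - 1)
            return lag / fs

        # Synthetic binaural pair: the right channel lags by 0.5 ms.
        fs = 48000
        rng = np.random.default_rng(0)
        src = rng.standard_normal(fs // 10)
        delay = int(0.0005 * fs)
        left = np.concatenate([src, np.zeros(delay)])
        right = np.concatenate([np.zeros(delay), src])
        print(estimate_itd(left, right, fs) * 1000, "ms")  # ~0.5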

  12. Computerized Respiratory Sounds: Novel Outcomes for Pulmonary Rehabilitation in COPD.

    Science.gov (United States)

    Jácome, Cristina; Marques, Alda

    2017-02-01

    Computerized respiratory sounds are a simple and noninvasive measure to assess lung function. Nevertheless, their potential to detect changes after pulmonary rehabilitation (PR) is unknown and needs clarification if respiratory acoustics are to be used in clinical practice. Thus, this study investigated the short- and mid-term effects of PR on computerized respiratory sounds in subjects with COPD. Forty-one subjects with COPD completed a 12-week PR program and a 3-month follow-up. Secondary outcome measures included dyspnea, self-reported sputum, FEV 1 , exercise tolerance, self-reported physical activity, health-related quality of life, and peripheral muscle strength. Computerized respiratory sounds, the primary outcomes, were recorded at right/left posterior chest using 2 stethoscopes. Air flow was recorded with a pneumotachograph. Normal respiratory sounds, crackles, and wheezes were analyzed with validated algorithms. There was a significant effect over time in all secondary outcomes, with the exception of FEV 1 and of the impact domain of the St George Respiratory Questionnaire. Inspiratory and expiratory median frequencies of normal respiratory sounds in the 100-300 Hz band were significantly lower immediately (-2.3 Hz [95% CI -4 to -0.7] and -1.9 Hz [95% CI -3.3 to -0.5]) and at 3 months (-2.1 Hz [95% CI -3.6 to -0.7] and -2 Hz [95% CI -3.6 to -0.5]) post-PR. The mean number of expiratory crackles (-0.8, 95% CI -1.3 to -0.3) and inspiratory wheeze occupation rate (median 5.9 vs 0) were significantly lower immediately post-PR. Computerized respiratory sounds were sensitive to short- and mid-term effects of PR in subjects with COPD. These findings are encouraging for the clinical use of respiratory acoustics. Future research is needed to strengthen these findings and explore the potential of computerized respiratory sounds to assess the effectiveness of other clinical interventions in COPD. Copyright © 2017 by Daedalus Enterprises.

  13. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum- and difference-frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite-amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  14. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    Science.gov (United States)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
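
    The analysis step reduces to a linear fit: the speed of sound is the slope of separation distance against measured time-of-flight. A sketch with illustrative numbers (not the paper's measurements):

        import numpy as np

        # Hypothetical time-of-flight readings of a click between speaker and
        # microphone at several separations, as read off the recorded waveform.
        distance_m = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
        time_s = np.array([0.00059, 0.00117, 0.00176, 0.00234, 0.00292])

        # The speed of sound is the slope of distance versus time.
        speed, intercept = np.polyfit(time_s, distance_m, 1)
        print(f"c = {speed:.1f} m/s")  # ~343 m/s at room temperature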

  15. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)
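
    A toy version of the detection idea, assuming frame-level log-energy and zero-crossing features feeding a small feed-forward network via scikit-learn; the audio, labels, and network size are placeholders, and the paper's actual ANN and features may differ.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        def frame_features(audio, fs, frame_ms=50):
            """Per-frame log-energy and zero-crossing rate, two classic features."""
            step = int(fs * frame_ms / 1000)
            frames = audio[: len(audio) // step * step].reshape(-1, step)
            log_energy = np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
            zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
            return np.column_stack([log_energy, zcr])

        # Hypothetical labelled sleep sounds: a 120 Hz "snore" from 10 s to 12 s.
        rng = np.random.default_rng(3)
        fs = 8000
        audio = rng.standard_normal(fs * 60) * 0.01  # background noise floor
        audio[fs * 10:fs * 12] += np.sin(2 * np.pi * 120 * np.arange(fs * 2) / fs)
        X = frame_features(audio, fs)
        y = np.zeros(len(X), dtype=int)
        y[10 * 20:12 * 20] = 1                       # 20 frames per second

        Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
        ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        ann.fit(Xtr, ytr)
        print(ann.score(Xte, yte))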

  16. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is destined for environmental problems; therefore, the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research; they apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  17. Evoked responses to sinusoidally modulated sound in unanaesthetized dogs

    NARCIS (Netherlands)

    Tielen, A.M.; Kamp, A.; Lopes da Silva, F.H.; Reneau, J.P.; Storm van Leeuwen, W.

    1. Responses evoked by sinusoidally amplitude-modulated sound in unanaesthetized dogs have been recorded from the inferior colliculus and from auditory cortex structures by means of chronically indwelling stainless steel wire electrodes. 2. Harmonic analysis of the average responses demonstrated

  18. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)

    2002-07-01

    A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the sound at the air intake orifice of a vehicle engine's first sixteen orders and half orders. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral shaped and growl. Videos were made of the roads traversed, along with binaural recordings of vehicle interior sounds and recordings of the vibrations of the vehicle floor pan. Jury tapes were made up for day driving, nighttime driving and driving in the rain during the day for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated the videos of the roads traversed, the binaural recordings of the vehicle interior sounds, and the vibrations of the floor pan and seat. (orig.) [German, translated] As part of a study, types of engine sounds were identified that drivers perceive as pleasant under various driving conditions. An active noise control system at the air intake, near the air filter, modified the sound of the engine up to the 16.5th engine order by attenuating, amplifying, and filtering the signal frequencies. During the drives, video recordings of the roads travelled, stereo recordings of the vehicle interior sounds, and recordings of the vibration amplitudes of the vehicle floor were made, for day and night drives and for daytime drives in the rain. For the evaluation of the recorded sounds by test subjects, a laboratory vehicle simulator with a driver's seat, screen, loudspeakers, and mechanical excitation of the floor panel was built to reproduce the recorded signals as realistically as possible. (orig.)

  19. Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-11-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
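
    The spectrogram analysis reduces to two frequency readings and one formula. With f_a the tone frequency on approach and f_r on recession, the Doppler equations give the source speed directly; the values below are illustrative, not from the paper's 48 recordings.

        # Doppler relations for a source moving past a stationary observer:
        #   f_a = f0 * c / (c - v)   (approaching)
        #   f_r = f0 * c / (c + v)   (receding)
        # Dividing the two and solving for v:
        c = 343.0            # speed of sound in air, m/s
        f_approach = 415.0   # Hz, hypothetical spectrogram reading
        f_recede = 380.0     # Hz, hypothetical spectrogram reading

        v = c * (f_approach - f_recede) / (f_approach + f_recede)
        f0 = f_approach * (c - v) / c   # the tone the source actually emits
        print(f"v = {v:.1f} m/s ({v * 3.6:.0f} km/h), source tone = {f0:.0f} Hz")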

  20. Role of Head Teachers in Ensuring Sound Climate

    Science.gov (United States)

    Kor, Jacob; Opare, James K.

    2017-01-01

    The school climate is outlined in the literature as one of the most important within-school factors required for effective teaching and learning. As leaders in any organisation are assigned the role of ensuring a sound climate for work, head teachers likewise have the task of creating and maintaining an environment conducive to effective academic work…

  1. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    Directory of Open Access Journals (Sweden)

    Mari eTervaniemi

    2014-07-01

    Full Text Available Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related MMN and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  2. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.

    Science.gov (United States)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  3. Using science soundly: The Yucca Mountain standard

    International Nuclear Information System (INIS)

    Fri, R.W.

    1995-01-01

    Using sound science to shape government regulation is one of the most hotly argued topics in the ongoing debate about regulatory reform. Even though no one advocates using unsound science, the belief that even the best science will sweep away regulatory controversy is equally foolish. As chair of a National Research Council (NRC) committee that studied the scientific basis for regulating high-level nuclear waste disposal, the author learned that science alone could resolve few of the key regulatory questions. Developing a standard that specifies a socially acceptable limit on the human health effects of nuclear waste releases involves many decisions. As the NRC committee learned in evaluating the scientific basis for the Yucca Mountain standard, a scientifically best decision rarely exists. More often, science can only offer a useful framework and starting point for policy debates. And sometimes, science's most helpful contribution is to admit that it has nothing to say. The Yucca Mountain study clearly illustrates that excessive faith in the power of science is more likely to produce messy frustration than crisp decisions. A better goal for regulatory reform is the sound use of science to clarify and contain the inevitable policy controversy

  4. The effect of sound sources on soundscape appraisal

    NARCIS (Netherlands)

    van den Bosch, Kirsten; Andringa, Tjeerd

    2014-01-01

    In this paper we explore how the perception of sound sources (like traffic, birds, and the presence of distant people) influences the appraisal of soundscapes (as calm, lively, chaotic, or boring). We have used 60 one-minute recordings, selected from 21 days (502 hours) in March and July 2010.

  5. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. The few existing studies on multimodal emotion processing have focused on human communication, such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset, and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels, and ERP analyses focused on picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  6. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.

  7. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  8. Graphic recording of heart sounds in height native subjects

    OpenAIRE

    Rotta, Andrés; Ascenzo C., Jorge

    2014-01-01

    The phonocardiograms obtained from normal subjects show that it is not always possible to record the atrial and third heart sounds, with different authors reporting diverse detection rates. Why graphic registration of these sounds fails in a large proportion of normal individuals has not yet been explained in concrete terms, although various influencing factors have been suggested, such as age, the determinants of the sounds, the transmissibility of the chest wall, and the sensitivity of the recording apparatus.

  9. Video and Sound Production: Flip out! Game on!

    Science.gov (United States)

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple of months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  10. Automatic Segmentation and Deep Learning of Bird Sounds

    NARCIS (Netherlands)

    Koops, Hendrik Vincent; Van Balen, J.M.H.; Wiering, F.

    2015-01-01

    We present a study on automatic birdsong recognition with deep neural networks using the BIRDCLEF2014 dataset. Through deep learning, feature hierarchies are learned that represent the data on several levels of abstraction. Deep learning has been applied with success to problems in fields such as
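
    The abstract does not specify the architecture, so the following is only a generic PyTorch sketch of the kind of spectrogram-patch CNN used for birdsong recognition; all shapes and layer sizes are illustrative assumptions.

        import torch
        import torch.nn as nn

        class BirdsongCNN(nn.Module):
            """Stacked conv layers learn a feature hierarchy from spectrograms."""

            def __init__(self, n_species):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, n_species)

            def forward(self, spec):          # spec: (batch, 1, 64, 64) patches
                h = self.features(spec)
                return self.classifier(h.flatten(1))

        model = BirdsongCNN(n_species=10)
        dummy = torch.randn(4, 1, 64, 64)     # four 64x64 log-spectrogram patches
        print(model(dummy).shape)             # torch.Size([4, 10])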

  11. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, during testing, and in the production stage. In this paper, we present the design and pilot establishment of a database that will assist researchers who want to establish an ESR system. The database employs a relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the web.
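
    The paper's actual schema is not given in the abstract; the following is a hypothetical minimal relational layout, built here with Python's sqlite3 for self-containment, illustrating the idea of recordings, sound classes, and annotations as linked tables.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE sound_class (
            class_id    INTEGER PRIMARY KEY,
            name        TEXT UNIQUE NOT NULL        -- e.g. 'dog bark', 'siren'
        );
        CREATE TABLE recording (
            rec_id      INTEGER PRIMARY KEY,
            file_path   TEXT NOT NULL,
            sample_rate INTEGER NOT NULL,
            duration_s  REAL NOT NULL
        );
        CREATE TABLE annotation (                   -- one clip, many labels
            rec_id      INTEGER REFERENCES recording(rec_id),
            class_id    INTEGER REFERENCES sound_class(class_id),
            t_start_s   REAL,
            t_end_s     REAL,
            PRIMARY KEY (rec_id, class_id, t_start_s)
        );
        """)
        con.execute("INSERT INTO sound_class(name) VALUES ('dog bark')")
        con.execute("INSERT INTO recording(file_path, sample_rate, duration_s) "
                    "VALUES ('clips/yard_01.wav', 44100, 12.4)")
        con.execute("INSERT INTO annotation VALUES (1, 1, 3.2, 4.1)")
        for row in con.execute(
                "SELECT r.file_path, c.name, a.t_start_s FROM annotation a "
                "JOIN recording r USING (rec_id) "
                "JOIN sound_class c USING (class_id)"):
            print(row)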

  12. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.

  13. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially in many pristine reefs which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds reflect a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity; however, the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscapes of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef, we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping-shrimp acoustic frequency bands, with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of potentially localized acoustic habitats. The strength of diel trends in the lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance for tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  14. Software for objective comparison of vocal acoustic features over weeks of audio recording: KLFromRecordingDays

    Science.gov (United States)

    Soderstrom, Ken; Alalawi, Ali

    KLFromRecordingDays allows measurement of Kullback-Leibler (KL) distances between 2D probability distributions of vocal acoustic features. Greater KL distance measures reflect increased phonological divergence across the vocalizations compared. The software has been used to compare *.wav file recordings, made by Sound Analysis Recorder 2011, of songbird vocalizations pre- and post-drug and surgical manipulations. Recordings from individual animals in *.wav format are first organized into subdirectories by recording day and then segmented into the individual syllables uttered, whose acoustic features are measured, using Sound Analysis Pro 2011 (SAP). KLFromRecordingDays uses the syllable acoustic feature data output by SAP to a MySQL table to generate and compare "template" (typically pre-treatment) and "target" (typically post-treatment) probability distributions. These distributions are a series of virtual 2D plots of the duration of each syllable (as x-axis) against each of 13 other acoustic features measured by SAP for that syllable (as y-axes). Differences between "template" and "target" probability distributions for each acoustic feature are determined by calculating the KL distance, a measure of the divergence of the target 2D distribution pattern from that of the template. KL distances and the mean KL distance across all acoustic features are calculated for each recording day and output to an Excel spreadsheet. Resulting data for individual subjects may then be pooled across treatment groups, graphically summarized, and used for statistical comparisons. Because SAP-generated MySQL files are accessed directly, data limits associated with spreadsheet output are avoided, and the totality of vocal output over weeks may be objectively analyzed all at once. The software has been useful for measuring drug effects on songbird vocalizations and assessing recovery from damage to regions of vocal motor cortex. It may be useful in studies employing other species, and as part of speech
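
    The core computation can be sketched as follows, assuming two (n_syllables x 2) arrays of SAP-style features (e.g. duration paired with one other feature); the binning and example data are illustrative, not KLFromRecordingDays' exact procedure.

        import numpy as np

        def kl_2d(template, target, bins=32, eps=1e-10):
            """KL distance between two 2D feature distributions.

            Each input is an (n_syllables, 2) array, e.g. duration vs. pitch.
            Distributions are normalised 2D histograms over a shared grid.
            """
            both = np.vstack([template, target])
            edges = [np.linspace(both[:, i].min(), both[:, i].max(), bins + 1)
                     for i in (0, 1)]
            p, _, _ = np.histogram2d(template[:, 0], template[:, 1], bins=edges)
            q, _, _ = np.histogram2d(target[:, 0], target[:, 1], bins=edges)
            p = p / p.sum() + eps   # smooth empty bins to keep the log finite
            q = q / q.sum() + eps
            # Divergence of the target distribution from the template
            return float(np.sum(q * np.log(q / p)))

        rng = np.random.default_rng(0)
        pre = rng.normal([0.10, 3000], [0.02, 300], size=(500, 2))   # s, Hz
        post = rng.normal([0.12, 2700], [0.03, 400], size=(500, 2))
        print(kl_2d(pre, post))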

  15. Software for objective comparison of vocal acoustic features over weeks of audio recording: KLFromRecordingDays

    Directory of Open Access Journals (Sweden)

    Ken Soderstrom

    2017-01-01

    Full Text Available KLFromRecordingDays allows measurement of Kullback–Leibler (KL) distances between 2D probability distributions of vocal acoustic features. Greater KL distance measures reflect increased phonological divergence across the vocalizations compared. The software has been used to compare *.wav file recordings made by Sound Analysis Recorder 2011 of songbird vocalizations pre- and post-drug and surgical manipulations. Recordings from individual animals in *.wav format are first organized into subdirectories by recording day and then segmented into individual syllables uttered and the acoustic features of these syllables using Sound Analysis Pro 2011 (SAP). KLFromRecordingDays uses the syllable acoustic feature data output by SAP to a MySQL table to generate and compare “template” (typically pre-treatment) and “target” (typically post-treatment) probability distributions. These distributions are a series of virtual 2D plots of the duration of each syllable (as x-axis) against each of 13 other acoustic features measured by SAP for that syllable (as y-axes). Differences between “template” and “target” probability distributions for each acoustic feature are determined by calculating the KL distance, a measure of the divergence of the target 2D distribution pattern from that of the template. KL distances and the mean KL distance across all acoustic features are calculated for each recording day and output to an Excel spreadsheet. Resulting data for individual subjects may then be pooled across treatment groups and graphically summarized and used for statistical comparisons. Because SAP-generated MySQL files are accessed directly, data limits associated with spreadsheet output are avoided, and the totality of vocal output over weeks may be objectively analyzed all at once. The software has been useful for measuring drug effects on songbird vocalizations and assessing recovery from damage to regions of vocal motor cortex. It may be useful in studies employing other

  16. Developing the STS sound pollution unit for enhancing students' applying knowledge among science technology engineering and mathematics

    Science.gov (United States)

    Jumpatong, Sutthaya; Yuenyong, Chokchai

    2018-01-01

    STEM education suggests that students should learn science through the integration of Science, Technology, Engineering, and Mathematics. To help Thai students make sense of the relationships among Science, Technology, Engineering, and Mathematics, this paper presents the learning activities of an STS Sound Pollution unit. The development of the STS Sound Pollution unit is part of research that aimed to enhance students' perception of the relationship between Science, Technology, Engineering, and Mathematics. This paper discusses how to develop the Sound Pollution unit through the STS approach in the framework of Yuenyong (2006), in which learning activities are provided in five stages: (1) identification of social issues, (2) identification of potential solutions, (3) need for knowledge, (4) decision-making, and (5) socialization. The learning activities can be highlighted as follows. In the first stage, we use a video clip about people's problems with sound pollution. In the second stage, students identify potential solutions by designing a home or factory without noise; the need for scientific and other knowledge is raised by the various alternative solutions. In the third stage, students gain scientific knowledge through laboratory work and demonstrations of sound waves. In the fourth stage, students make a decision on the best solution for designing a safe home or factory based on their scientific knowledge and other considerations (e.g. mathematics, economics, art, values, and so on). Finally, students present and share their safe home or factory designs in society (e.g. via social media or an exhibition) in order to validate their ideas and inform redesign. The paper then discusses how these activities allow students to apply knowledge of science, technology, engineering, mathematics, and other areas (art, culture, and values) to possible solutions of the STS issues.

  17. A consideration on physical tuning for acoustical coloration in recording studio

    Science.gov (United States)

    Shimizu, Yasushi

    2003-04-01

    Coloration due to particular architectural shapes and dimensions, or to insufficient surface absorption, has been cited as an acoustical defect in recording studios. Generally, interference among early reflections arriving within 10 ms of the direct sound produces coloration through a comb-filter effect over mid- and high-frequency sounds. In addition, poorly absorbed room resonance modes are well known as a major source of coloration in low-frequency sounds. The small dimensions of recording studios, however, make their characterization difficult because of wave-acoustic behavior, which renders acoustical optimization harder than in concert hall acoustics. It also remains difficult to evaluate the amount of coloration and to predict its acoustical characteristics by acoustical modeling; in other words, acoustical tuning during construction is regarded as important for optimizing the acoustics appropriately to the function of the recording studio. This paper presents an example of coloration caused by the comb-filtering effect and by poorly damped room modes in a typical post-processing recording studio. Acoustical design and measurement techniques are presented for adjusting the timbre due to coloration, based on the psychoacoustics of binaural hearing, and for controlling room resonances with a line-array resonator tuned to the particular room modes considered.
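
    The comb-filter mechanism behind this coloration can be stated compactly; for a direct sound plus a single reflection of relative gain a arriving \tau seconds later, a textbook relation (not specific to this paper) gives:

        y(t) = x(t) + a\,x(t - \tau), \qquad |H(f)| = \sqrt{1 + a^{2} + 2a\cos(2\pi f \tau)}

    Peaks fall at f = k/\tau and notches at f = (2k + 1)/(2\tau), k = 0, 1, 2, ..., so a reflection arriving 1 ms after the direct sound carves notches every 1 kHz starting at 500 Hz, heard as the regular spectral ripple of coloration.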

  18. Atmospheric limb sounding with imaging FTS

    Science.gov (United States)

    Friedl-Vallon, Felix; Riese, Martin; Preusse, Peter; Oelhaf, Hermann; Fischer, Herbert

    Imaging Fourier transform spectrometers in the thermal infrared are a promising new class of sensors for atmospheric science. The availability of fast and sensitive large focal-plane arrays with appropriate spectral coverage in the infrared region allows the conception and construction of innovative sensors for nadir and limb geometry. Instruments in nadir geometry have already reached prototype status (e.g. the Geostationary Imaging Fourier Transform Spectrometer / U. Wisconsin and NASA) or are in Phase A study (infrared sounding mission on Meteosat Third Generation / ESA and EUMETSAT). The first application of the new technical possibilities to atmospheric limb sounding from space, the Imaging Michelson Interferometer for Passive Atmospheric Sounding (IMIPAS), is currently being studied by industry in the context of preparatory work for the next set of ESA Earth Explorers. The scientific focus of the instrument is on the processes controlling the composition of the mid/upper troposphere and lower stratosphere. The instrument concept of IMIPAS was conceived at the research centres Karlsruhe and Jülich. The development of a precursor instrument (GLORIA-AB) at these research institutions started already in 2005. The instrument will be able to fly on board various airborne platforms. First scientific missions are planned for the second half of the year 2009 on board the new German research aircraft HALO. This airborne sensor serves its own scientific purpose, but it also provides a test bed to learn about this new instrument class and its peculiarities, and to learn to exploit and interpret the wealth of information provided by a limb-imaging IR Fourier transform spectrometer. The presentation will discuss design considerations and challenges for GLORIA-AB and put them in the context of the planned satellite application. It will describe the solutions found, present first laboratory figures of merit for the prototype instrument and outline the new scientific

  19. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

    Full Text Available In this paper we inspect the difference between an original sound recording and the signal captured after streaming the original recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording as a result of network congestion. We try to find a method for evaluating the correctness of the streamed audio. Usually, metrics are based on human perception of the signal, such as “the signal is clear, without audible failures”, “the signal has some failures but is understandable”, or “the signal is inarticulate”. These approaches need to be statistically evaluated on a broad set of respondents, which is time and resource consuming. We instead propose metrics based on signal properties that allow us to compare the original and the captured recording. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time-series comparison. Some other time-series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures in the received audio data stream. This experiment focuses on the comparison of sound recordings rather than on the network mechanism. We focus, in this paper, on real-time audio streams, such as a telephone call, where it is not possible to stream the audio in advance to a “pool”; instead it is necessary to achieve as small a delay as possible between the speaker's voice recording and the listener's voice replay. We use the RTP protocol for streaming audio.
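
    A compact textbook DTW implementation of the kind cited (Müller, 2007) is shown below; the per-frame envelopes being compared are synthetic stand-ins for the original and captured recordings.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic Time Warping distance between two 1-D feature sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    # Best of insertion, deletion, or match at this cell
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Hypothetical per-frame energy envelopes: congestion stretches and
        # offsets the captured stream relative to the original.
        original = np.sin(np.linspace(0, 6, 100))
        captured = np.sin(np.linspace(0, 6, 90)) + 0.02
        print(dtw_distance(original, captured))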

  20. Infants' Learning, Memory, and Generalization of Learning for Bimodal Events.

    Science.gov (United States)

    Morrongiello, Barbara A.; Lasenby, Jennifer; Lee, Naomi

    2003-01-01

    Two studies examined the impact of temporal synchrony on infants' learning of and memory for sight-sound pairs. Findings indicated that 7-month-olds had no difficulty learning auditory-visual pairs regardless of temporal synchrony, remembering them 10 minutes later and 1 week later. Three-month-olds showed poorer learning in no-synchrony than in…

  1. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    Science.gov (United States)

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence (Journal of Experimental Psychology: Applied, 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short…

  2. Wild birds learn to eavesdrop on heterospecific alarm calls.

    Science.gov (United States)

    Magrath, Robert D; Haff, Tonya M; McLachlan, Jessica R; Igic, Branislav

    2015-08-03

    Many vertebrates gain critical information about danger by eavesdropping on other species' alarm calls [1], providing an excellent context in which to study information flow among species in animal communities [2-4]. A fundamental but unresolved question is how individuals recognize other species' alarm calls. Although individuals respond to heterospecific calls that are acoustically similar to their own, alarms vary greatly among species, and eavesdropping probably also requires learning [1]. Surprisingly, however, we lack studies demonstrating such learning. Here, we show experimentally that individual wild superb fairy-wrens, Malurus cyaneus, can learn to recognize previously unfamiliar alarm calls. We trained individuals by broadcasting unfamiliar sounds while simultaneously presenting gliding predatory birds. Fairy-wrens in the experiment originally ignored these sounds, but most fled in response to the sounds after two days' training. The learned response was not due to increased responsiveness in general or to sensitization following repeated exposure and was independent of sound structure. Learning can therefore help explain the taxonomic diversity of eavesdropping and the refining of behavior to suit the local community. In combination with previous work on unfamiliar predator recognition (e.g., [5]), our results imply rapid spread of anti-predator behavior within wild populations and suggest methods for training captive-bred animals before release into the wild [6]. A remaining challenge is to assess the importance and consequences of direct association of unfamiliar sounds with predators, compared with social learning-such as associating unfamiliar sounds with conspecific alarms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    DEFF Research Database (Denmark)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies…

  4. Underwater sound from vessel traffic reduces the effective communication range in Atlantic cod and haddock.

    Science.gov (United States)

    Stanley, Jenni A; Van Parijs, Sofie M; Hatch, Leila T

    2017-11-07

    Stellwagen Bank National Marine Sanctuary is located in Massachusetts Bay off the densely populated northeast coast of the United States; consequently, the marine inhabitants of the area are exposed to elevated levels of anthropogenic underwater sound, particularly due to commercial shipping. The current study investigated the alteration of estimated effective communication spaces at three spawning locations for populations of the commercially and ecologically important fishes Atlantic cod (Gadus morhua) and haddock (Melanogrammus aeglefinus). Both the ambient sound pressure levels and the estimated effective vocalization radii, estimated through spherical spreading models, fluctuated dramatically during the three-month recording periods. Increases in sound pressure level appeared to be largely driven by large vessel activity, and accordingly exhibited a significant positive correlation with the number of Automatic Identification System tracked vessels at two of the three sites. The near-constant high levels of low-frequency sound, and the consequent reduction in communication space observed at these recording sites during times of high vocalization activity, raise significant concerns that communication between conspecifics may be compromised during critical biological periods. This study takes the first steps in evaluating these animals' communication spaces and the alteration of these spaces due to anthropogenic underwater sound.
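
    The spherical-spreading logic behind such communication-space estimates can be shown with a toy calculation; all levels below are hypothetical and purely illustrative:

        def effective_radius(source_level_db, noise_level_db, detection_threshold_db=0.0):
            """Range (m) at which a vocalization just exceeds the noise floor,
            assuming spherical spreading loss TL = 20*log10(r)."""
            excess = source_level_db - noise_level_db - detection_threshold_db
            return 10 ** (excess / 20.0)

        # Hypothetical levels (dB re 1 uPa): a fish call against quiet ambient
        # conditions versus an elevated noise floor from passing vessels.
        print(effective_radius(130.0, 90.0))   # ~100 m
        print(effective_radius(130.0, 110.0))  # ~10 m: communication space shrinks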

  5. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seong Su [The Catholic University of Korea, Suwon (Korea, Republic of)

    2007-04-15

    I wanted to describe a method for creating teaching movies using screen recordings, and to see if self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of the teaching movies for self-learning of abdominal CT anatomy was evaluated by medical students. Creating teaching movie clips with screen recording software was simple and easy. Survey responses were collected from 43 medical students. The contents of the teaching movies were adequately understandable (52%) and useful for learning (47%). Only 23% of students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and on the radiological interpretation of various diseases (42%). Creating a teaching movie by directly screen-recording a radiologist's interpretation process is easy and simple. The teaching video clips reveal a radiologist's interpretation process or the explanation of teaching cases with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  6. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    International Nuclear Information System (INIS)

    Hwang, Seong Su

    2007-01-01

    I wanted to describe a method for creating teaching movies using screen recordings, and to see if self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of the teaching movies for self-learning of abdominal CT anatomy was evaluated by medical students. Creating teaching movie clips with screen recording software was simple and easy. Survey responses were collected from 43 medical students. The contents of the teaching movies were adequately understandable (52%) and useful for learning (47%). Only 23% of students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and on the radiological interpretation of various diseases (42%). Creating a teaching movie by directly screen-recording a radiologist's interpretation process is easy and simple. The teaching video clips reveal a radiologist's interpretation process or the explanation of teaching cases with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  7. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  8. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows.

    Science.gov (United States)

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Röösli, Martin; Brink, Mark; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-18

    Noise exposure prediction models for health effect studies normally estimate free-field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and the factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios (open, tilted, and closed windows) were recorded for three minutes each. For each situation, data on additional parameters, such as the orientation towards the source, floor, and room, as well as sound insulation characteristics, were collected. On that basis, linear regression models were established. The median outdoor-indoor sound level differences were 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor-indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows.
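
    The form of the regression models described above can be sketched as follows; predictors and numbers are hypothetical placeholders, not the study's data:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical predictors for one window position: room volume (m3),
        # building age (years), bedroom flag (1 = bedroom, 0 = living room).
        X = np.array([[45, 30, 1], [60, 12, 1], [30, 55, 0], [50, 8, 0], [70, 40, 1]])
        y = np.array([9.5, 10.2, 15.8, 16.5, 10.9])  # outdoor-indoor difference, dB(A)

        model = LinearRegression().fit(X, y)
        print(model.intercept_, model.coef_)  # dB(A) effect per predictor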

  9. Focal versus distributed temporal cortex activity for speech sound category assignment

    Science.gov (United States)

    Bouton, Sophie; Chambon, Valérian; Tyrand, Rémi; Seeck, Margitta; Karkar, Sami; van de Ville, Dimitri; Giraud, Anne-Lise

    2018-01-01

    Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision. PMID:29363598

  10. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    Science.gov (United States)

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  11. Alternative Paths to Hearing (A Conjecture). Photonic and Tactile Hearing Systems Displaying the Frequency Spectrum of Sound

    Directory of Open Access Journals (Sweden)

    E. H. Hara

    2006-01-01

    In this article, the hearing process is considered from a systems engineering perspective. For those with total hearing loss, a cochlear implant is the only direct remedy. It first acts as a spectrum analyser and then electronically stimulates the neurons in the cochlea with a number of electrodes. Each electrode carries information on the separate frequency bands (i.e., the spectrum) of the original sound signal. The neurons then relay the signals in a parallel manner to the section of the brain where sound signals are processed. Photonic and tactile hearing systems displaying the spectrum of sound are proposed as alternative paths to the section of the brain that processes sound. In view of the plasticity of the brain, which can rewire itself, the following conjectures are offered. After a certain period of training, a person without the ability to hear should be able to decipher the patterns of photonic or tactile displays of the sound spectrum and learn to ‘hear’. This is very similar to the case of a blind person learning to ‘read’ by recognizing the patterns created by the series of bumps as their fingers scan Braille writing. The conjectures are yet to be tested. Designs of photonic and tactile systems displaying the sound spectrum are outlined.

  12. Chronic scream sound exposure alters memory and monoamine levels in female rat brain.

    Science.gov (United States)

    Hu, Lili; Zhao, Xiaoge; Yang, Juan; Wang, Lumin; Yang, Yang; Song, Tusheng; Huang, Chen

    2014-10-01

    Chronic scream sound alters the cognitive performance of male rats and their brain monoamine levels; these stress-induced alterations are sexually dimorphic. To determine the effects of sound stress on female rats, we examined their serum corticosterone levels; their adrenal, splenic, and thymic weights; their cognitive performance; and the levels of monoamine neurotransmitters and their metabolites in the brain. Adult female Sprague-Dawley rats, with and without exposure to scream sound (4 h/day for 21 days), were tested for spatial learning and memory using a Morris water maze. Stress decreased serum corticosterone levels, as well as splenic and adrenal weight. It also impaired spatial memory but did not affect learning ability. Monoamines and metabolites were measured in the prefrontal cortex (PFC), striatum, hypothalamus, and hippocampus. Dopamine (DA) levels in the PFC decreased, while the homovanillic acid/DA ratio increased. Decreased DA and increased 5-hydroxyindoleacetic acid (5-HIAA) levels were observed in the striatum. Only the 5-HIAA level increased in the hypothalamus. In the hippocampus, stress did not affect the levels of monoamines and metabolites. The results suggest that scream sound stress influences most physiologic parameters, memory, and the levels of monoamine neurotransmitters and their metabolites in female rats. Copyright © 2014. Published by Elsevier Inc.

  13. Searching for learning-dependent changes in the antennal lobe: simultaneous recording of neural activity and aversive olfactory learning in honeybees

    Directory of Open Access Journals (Sweden)

    Edith Roussel

    2010-09-01

    Plasticity in the honeybee brain has been studied using the appetitive olfactory conditioning of the proboscis extension reflex, in which a bee learns the association between an odor and a sucrose reward. In this framework, coupling behavioral measurements of proboscis extension with invasive recordings of neural activity has been difficult because proboscis movements usually introduce brain movements that affect physiological preparations. Here we took advantage of a new conditioning protocol, the aversive olfactory conditioning of the sting extension reflex, which does not generate this problem. We achieved the first simultaneous recordings of conditioned sting extension responses and calcium imaging of antennal lobe activity, thus revealing on-line processing of olfactory information during conditioning trials. Based on behavioral output, we distinguished learners and non-learners and analyzed possible learning-dependent changes in antennal lobe activity. We did not find differences between glomerular responses to the CS+ and the CS- in learners. Unexpectedly, we found that during conditioning trials non-learners exhibited a progressive decrease in physiological responses to odors, irrespective of their valence. This effect could be attributed neither to a fitness problem nor to abnormal dye bleaching. We discuss the absence of learning-induced changes in the antennal lobe of learners and the decrease in calcium responses found in non-learners. Further studies will have to extend the search for functional plasticity related to aversive learning to other brain areas and to look at a broader range of temporal scales.

  14. Computerised Analysis of Telemonitored Respiratory Sounds for Predicting Acute Exacerbations of COPD

    Directory of Open Access Journals (Sweden)

    Miguel Angel Fernandez-Granero

    2015-10-01

    Chronic obstructive pulmonary disease (COPD) is one of the commonest causes of death in the world and poses a substantial burden on healthcare systems and patients' quality of life. The largest component of the related healthcare costs is attributable to admissions due to acute exacerbations (AECOPD). The evidence that might support the effectiveness of telemonitoring interventions in COPD is limited, partially due to the lack of useful predictors for the early detection of AECOPD. Electronic stethoscopes and computerised analysis of respiratory sounds (CARS) techniques provide an opportunity for substantial improvement in the management of respiratory diseases. This exploratory study aimed to evaluate the feasibility of using: (a) a respiratory sensor embedded in a self-tailored housing for ageing users; (b) a telehealth framework; (c) CARS; and (d) machine learning techniques for the remote early detection of AECOPD. In a 6-month pilot study, 16 patients with COPD were equipped with a home base-station and a sensor to record their respiratory sounds daily. Principal component analysis (PCA) and a support vector machine (SVM) classifier were used to predict AECOPD. 75.8% of exacerbations were detected early, on average 5 ± 1.9 days before medical attention was sought. The proposed method could provide support to patients, physicians and healthcare systems.
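
    The classification pipeline named above (PCA followed by an SVM) can be sketched with scikit-learn; features and labels below are random placeholders, not the study's data:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder data: one row per daily recording, columns standing in for
        # acoustic features extracted by CARS; 1 labels days preceding an AECOPD.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 40))
        y = rng.integers(0, 2, size=200)

        clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())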

  15. Using electronic storybooks to support word learning in children with severe language impairments.

    Science.gov (United States)

    Smeets, Daisy J H; van Dijken, Marianne J; Bus, Adriana G

    2014-01-01

    Novel word learning is reported to be problematic for children with severe language impairments (SLI). In this study, we tested electronic storybooks as a tool to support vocabulary acquisition in SLI children. In Experiment 1, 29 kindergarten SLI children heard four e-books each four times: (a) two stories were presented as video books with motion pictures, music, and sounds, and (b) two stories included only static illustrations without music or sounds. Two other stories served as the control condition. Both static and video books were effective in increasing knowledge of unknown words, but static books were most effective. Experiment 2 was designed to examine which elements in video books interfere with word learning: video images or music or sounds. A total of 23 kindergarten SLI children heard 8 storybooks each four times: (a) two static stories without music or sounds, (b) two static stories with music or sounds, (c) two video stories without music or sounds, and (d) two video books with music or sounds. Video images and static illustrations were equally effective, but the presence of music or sounds moderated word learning. In children with severe SLI, background music interfered with learning. Problems with speech perception in noisy conditions may be an underlying factor of SLI and should be considered in selecting teaching aids and learning environments. © Hammill Institute on Disabilities 2012.

  16. Global Bathymetry: Machine Learning for Data Editing

    Science.gov (United States)

    Sandwell, D. T.; Tea, B.; Freund, Y.

    2017-12-01

    The accuracy of global bathymetry depends primarily on the coverage and accuracy of the sounding data and secondarily on the depth predicted from gravity. A main focus of our research is to add newly-available data to the global compilation. Most data sources have 1-12% of erroneous soundings caused by a wide array of blunders and measurement errors. Over the years we have hand-edited these data using undergraduate employees at UCSD (440 million soundings at 500 m resolution). We are developing a machine learning approach to refine the flagging of the older soundings and provide automated editing of newly-acquired soundings. The approach has three main steps: 1) Combine the sounding data with additional information that may inform the machine learning algorithm. The additional parameters include: depth predicted from gravity; distance to the nearest sounding from other cruises; seafloor age; spreading rate; sediment thickness; and vertical gravity gradient. 2) Use available edit decisions as training data sets for a boosted tree algorithm with a binary logistic objective function and L2 regularization. Initial results with poor quality single beam soundings show that the automated algorithm matches the hand-edited data 89% of the time. The results show that most of the information for detecting outliers comes from predicted depth, with secondary contributions from distance to the nearest sounding and longitude. A similar analysis using very high quality multibeam data shows that the automated algorithm matches the hand-edited data 93% of the time. Again, most of the information for detecting outliers comes from predicted depth, with secondary contributions from distance to the nearest sounding and longitude. 3) The third step in the process is to use the machine learning parameters, derived from the training data, to edit 12 million newly acquired single beam sounding data provided by the National Geospatial-Intelligence Agency. The output of the learning algorithm will be…
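
    Step 2 can be sketched with any boosted-tree library exposing a binary logistic objective and L2 regularization; the example below uses xgboost with synthetic placeholder features patterned on the list above (not the authors' code or data):

        import numpy as np
        from xgboost import XGBClassifier  # boosted trees with L2 regularization

        rng = np.random.default_rng(1)
        n = 5000
        # Synthetic per-sounding features patterned on those listed above.
        X = np.column_stack([
            rng.normal(-4000, 1500, n),  # depth predicted from gravity (m)
            rng.exponential(5000, n),    # distance to nearest other-cruise sounding (m)
            rng.uniform(0, 180, n),      # seafloor age (Myr)
            rng.normal(0, 30, n),        # vertical gravity gradient (Eotvos)
        ])
        y = rng.integers(0, 2, n)        # 1 = sounding flagged bad by a human editor

        model = XGBClassifier(objective="binary:logistic", reg_lambda=1.0,
                              n_estimators=300, max_depth=4)
        model.fit(X, y)
        print("feature importances:", model.feature_importances_)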

  17. Four odontocete species change hearing levels when warned of impending loud sound.

    Science.gov (United States)

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  18. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force leads to a tissue displacement in the micrometer range. The displacement depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue displacement (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the path of the sound beam in the tissue can be visualized, so that sound obstacles (changes of the sound impedance) can also be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. This thesis presents measurements that show the feasibility and future potential of this method, especially for breast cancer diagnostics. [de]

  19. Physiological and psychological assessment of sound

    Science.gov (United States)

    Yanagihashi, R.; Ohira, Masayoshi; Kimura, Teiji; Fujiwara, Takayuki

    The psycho-physiological effects of several sound stimulations were investigated to evaluate the relationship between a psychological parameter, such as subjective perception, and a physiological parameter, such as heart rate variability (HRV). Eight female students aged 21-22 years were tested. An electrocardiogram (ECG) and the movement of the chest wall (for estimating respiratory rate) were recorded during three different sound stimulations: (1) music provided by a synthesizer (condition A); (2) bird twitters (condition B); and (3) mechanical sounds (condition C). The percentage power of the low-frequency (LF; 0.05-0.15 Hz) and high-frequency (HF; 0.15-0.40 Hz) components of the HRV (LF%, HF%) was assessed by frequency analysis of 5-min time-series data obtained from R-R intervals in the ECG. Quantitative assessment of subjective perception was also obtained with a visual analog scale (VAS). The HF% and the VAS value for comfort in C were significantly lower than in A and/or B. The respiratory rate and the VAS value for awakening in C were significantly higher than in A and/or B. There was a significant correlation between the HF% and the VAS value, and between the respiratory rate and the VAS value. These results indicate that mechanical sounds similar to C inhibit the parasympathetic nervous system and promote a feeling that is unpleasant but alert, also suggesting that the HRV reflects subjective perception.
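
    A sketch of how LF% and HF% can be derived from R-R intervals, assuming an evenly resampled tachogram and a Welch periodogram (the recording itself is synthetic):

        import numpy as np
        from scipy.interpolate import interp1d
        from scipy.signal import welch

        def lf_hf_percent(rr_s):
            """Percentage power of the LF (0.05-0.15 Hz) and HF (0.15-0.40 Hz)
            bands of heart rate variability, from R-R intervals in seconds."""
            t = np.cumsum(rr_s)                      # R-peak times
            fs = 4.0                                 # even resampling rate (Hz)
            grid = np.arange(t[0], t[-1], 1.0 / fs)
            tachogram = interp1d(t, rr_s, kind="cubic")(grid)
            f, pxx = welch(tachogram - tachogram.mean(), fs=fs, nperseg=256)
            df = f[1] - f[0]
            power = lambda lo, hi: pxx[(f >= lo) & (f < hi)].sum() * df
            lf, hf, total = power(0.05, 0.15), power(0.15, 0.40), power(0.05, 0.40)
            return 100.0 * lf / total, 100.0 * hf / total

        # Hypothetical 5-minute recording: ~70 bpm with respiratory modulation.
        beats = np.arange(350)
        rr = 0.86 + 0.03 * np.sin(2 * np.pi * 0.25 * beats * 0.86)
        print(lf_hf_percent(rr))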

  20. Beaches and Bluffs of Puget Sound and the Northern Straits

    Science.gov (United States)

    2007-04-01

    …sand up to pebbles, cobbles, and occasionally boulders, often also containing shelly material. Puget Sound beaches commonly have two distinct… very limited historic wind records (wave hindcasting). Drift directions indicated in the Atlas have repeatedly been proven inaccurate (Johannessen…

  1. Three-dimensional sound localisation with a lizard peripheral auditory model

    DEFF Research Database (Denmark)

    Kjær Schmidt, Michael; Shaikh, Danish

    Our approach utilises a model of the peripheral auditory system of lizards [Christensen-Dalsgaard and Manley 2005] coupled with a multi-layer perceptron neural network to estimate the location of an acoustic target in three dimensions. The peripheral auditory model's response to sound input encodes sound direction information in a single plane, which by itself is insufficient to localise the acoustic target in three dimensions. A multi-layer perceptron neural network is therefore used to combine two independent responses of the model, corresponding to two rotational movements, into an estimate of the sound direction: the networks learned a transfer function that translated the three-dimensional non-linear mapping into estimated azimuth and elevation values for the acoustic target. The neural network with two hidden layers, as expected, performed better than that with only one hidden layer. Our approach assumes that for any…

  2. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    Science.gov (United States)

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed with Kohonen's artificial neural network algorithm, and seven site-specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian people.

  3. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians and non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.

  4. An investigation of the usability of sound recognition for source separation of packaging wastes in reverse vending machines.

    Science.gov (United States)

    Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal

    2016-10-01

    In this study, we investigate the usability of sound recognition for source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impacts (free falling, pneumatic hitting, and hydraulic crushing) were separately recorded using two different microphones. To classify the waste types and sizes based on the sound features of the wastes, support vector machine (SVM)- and hidden Markov model (HMM)-based sound classification systems were developed. In the basic experimental setup, in which only the free-falling impact type was considered, the SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup, which includes all three impact types, material type classification accuracies were 96.5% for the dynamic microphone and 97.7% for the condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for both microphones. The modeling studies indicated that the hydraulic crushing impact recordings were too noisy for an effective sound recognition application. In the detailed analysis of the recognition errors, it was observed that most of the errors occurred in the hitting impact type. According to the experimental results, it can be said that the proposed novel approach to the separation of packaging wastes could provide a high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
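
    A toy version of the SVM branch of such a classifier. MFCC statistics are a conventional feature choice for impact sounds, not necessarily the features used in the study, and the "recordings" here are synthetic stand-ins:

        import numpy as np
        import librosa
        from sklearn.svm import SVC

        SR = 22050

        def mfcc_features(y, sr=SR):
            """Mean and std of 13 MFCCs: a compact descriptor for short impact sounds."""
            m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            return np.concatenate([m.mean(axis=1), m.std(axis=1)])

        def synth_impact(f0, decay, seed):
            """Stand-in for a recorded drop sound: damped tone plus a little noise."""
            rng = np.random.default_rng(seed)
            t = np.arange(0, 0.5, 1 / SR)
            return np.exp(-decay * t) * (np.sin(2 * np.pi * f0 * t)
                                         + 0.1 * rng.normal(size=t.size))

        # Hypothetical classes standing in for material types in the RVM experiment.
        train = [synth_impact(f, d, s) for f, d, s in
                 [(2400, 18, 0), (2500, 20, 1),   # "glass"-like
                  (800, 8, 2), (850, 9, 3)]]      # "plastic"-like
        labels = ["glass", "glass", "plastic", "plastic"]

        clf = SVC(kernel="rbf").fit(np.vstack([mfcc_features(y) for y in train]), labels)
        print(clf.predict([mfcc_features(synth_impact(2450, 19, 4))]))  # expect "glass"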

  5. Subjective evaluation of restaurant acoustics in a virtual sound environment

    DEFF Research Database (Denmark)

    Nielsen, Nicolaj Østergaard; Marschall, Marton; Santurette, Sébastien

    2016-01-01

    Many restaurants have smooth rigid surfaces made of wood, steel, glass, and concrete. This often results in a lack of sound absorption. Such restaurants are notorious for high noise levels during service, which most owners actually desire as representing vibrant eating environments, although surveys report that noise complaints are on par with those about poor service. This study investigated the relation between objective acoustic parameters and subjective evaluations of acoustic comfort at five restaurants in terms of three parameters: noise annoyance, speech intelligibility, and privacy. At each location, customers filled out questionnaire surveys, acoustic parameters were measured, and recordings of restaurant acoustic scenes were obtained with a 64-channel spherical array. The acoustic scenes were reproduced in a virtual sound environment (VSE) with 64 loudspeakers placed in an anechoic room…

  6. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High-frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and is handy…

  7. Understanding the Doppler Effect by Analysing Spectrograms of the Sound of a Passing Vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-01-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a…
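
    The arithmetic behind such an analysis is compact: if a steady source tone reads f_a on approach and f_r on recession in the spectrogram, the source speed follows directly (the frequency readings below are hypothetical):

        # Speed of a passing car from the Doppler shift of a steady tone (e.g.,
        # an engine harmonic) read off a spectrogram.
        c = 343.0  # speed of sound in air, m/s (approx., at 20 deg C)

        def speed_from_doppler(f_approach, f_recede):
            """From f_a = f0*c/(c-v) and f_r = f0*c/(c+v):
            v = c * (f_a - f_r) / (f_a + f_r)."""
            return c * (f_approach - f_recede) / (f_approach + f_recede)

        # Hypothetical spectrogram readings in Hz:
        print(speed_from_doppler(510.0, 470.0), "m/s")  # ~14 m/s, i.e. ~50 km/h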

  8. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    …sounds 1) associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound…

  9. Respiratory flow-sound relationship during both wakefulness and sleep and its variation in relation to sleep apnea.

    Science.gov (United States)

    Yadollahi, Azadeh; Montazeri, Aman; Azarbarzin, Ali; Moussavi, Zahra

    2013-03-01

    Tracheal respiratory sound analysis is a simple and non-invasive way to study the pathophysiology of the upper airway and has recently been used for acoustic estimation of respiratory flow and sleep apnea diagnosis. However, in none of the previous studies was the respiratory flow-sound relationship studied in people with obstructive sleep apnea (OSA), nor during sleep. In this study, we recorded tracheal sound, respiratory flow, and head position from eight non-OSA and 10 OSA individuals during sleep and wakefulness. We compared the flow-sound relationship and the variations in model parameters from wakefulness to sleep within and between the two groups. The results show that during both wakefulness and sleep, the flow-sound relationship follows a power law, but with different parameters. Furthermore, the variations in model parameters may be representative of the OSA pathology. The other objective of this study was to examine the accuracy of respiratory flow estimation algorithms during sleep: we investigated two approaches for calibrating the model parameters using the known data recorded during either wakefulness or sleep. The results show that the acoustical respiratory flow estimation parameters change from wakefulness to sleep. Therefore, if the model is calibrated using wakefulness data, although the estimated respiratory flow follows the relative variations of the real flow, the quantitative flow estimation error would be high during sleep. On the other hand, when the calibration parameters are extracted from tracheal sound and respiratory flow recordings during sleep, the respiratory flow estimation error is less than 10%.
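
    A minimal sketch of the power-law calibration idea, fitting sound = k * flow^alpha as a straight line in log-log coordinates (synthetic data; the exponent is an arbitrary assumption, not the paper's value):

        import numpy as np

        def fit_power_law(flow, sound_power):
            """Fit sound_power = k * flow**alpha by linear regression in log-log space."""
            slope, intercept = np.polyfit(np.log(flow), np.log(sound_power), 1)
            return np.exp(intercept), slope  # k, alpha

        def estimate_flow(sound_power, k, alpha):
            """Invert the calibrated model to estimate flow from tracheal sound power."""
            return (sound_power / k) ** (1.0 / alpha)

        # Hypothetical calibration data in arbitrary units (exponent chosen freely).
        flow = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
        noise = 1 + 0.02 * np.random.default_rng(2).normal(size=flow.size)
        sound = 2.0 * flow ** 2.5 * noise
        k, alpha = fit_power_law(flow, sound)
        print(k, alpha, estimate_flow(sound, k, alpha))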

  10. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account the directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra…

  11. International perception of lung sounds: a comparison of classification across some European borders.

    Science.gov (United States)

    Aviles-Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder A; Francis, Nick; Cals, Jochen W L; Andreeva, Elena A; Marques, Alda; Piirilä, Päivi; Pasterkamp, Hans; Melbye, Hasse

    2017-01-01

    Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research in this area to generate evidence-based knowledge about the role that chest auscultation has in a modern clinical setting. The recording and visual display of lung sounds is a method that is both repeatable and feasible to use in large samples, and the aim of this study was to evaluate interobserver agreement using this method. With a microphone in a stethoscope tube, we collected digital recordings of lung sounds from six sites on the chest surface in 20 subjects aged 40 years or older, with and without lung and heart diseases. A total of 120 recordings and their spectrograms were independently classified by 28 observers from seven different countries. We employed absolute agreement and kappa coefficients to explore interobserver agreement in classifying crackles and wheezes within and between subgroups of four observers. When evaluating agreement on crackles (inspiratory or expiratory) in each subgroup, observers agreed on between 65% and 87% of the cases. Conger's kappa ranged from 0.20 to 0.58, and four out of seven groups reached a kappa of ≥0.49. In the classification of wheezes, we observed a probability of agreement between 69% and 99.6% and kappa values from 0.09 to 0.97. Four out of seven groups reached a kappa of ≥0.62. The kappa values we observed ranged widely, but, when its limitations are addressed, we find the method of recording and presenting lung sounds with spectrograms sufficient for both clinical work and research. Standardisation of terminology across countries would improve international communication on lung auscultation findings.
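
    Multi-rater chance-corrected agreement of this kind can be computed with standard tools; the sketch below uses Fleiss' kappa, a close relative of the Conger coefficient reported in the study, on hypothetical crackle classifications:

        import numpy as np
        from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

        # Hypothetical ratings: 10 recordings x 4 observers,
        # 0 = no crackles, 1 = inspiratory crackles, 2 = expiratory crackles.
        ratings = np.array([
            [1, 1, 1, 0],
            [0, 0, 0, 0],
            [2, 2, 1, 2],
            [0, 0, 1, 0],
            [1, 1, 1, 1],
            [0, 0, 0, 0],
            [2, 1, 2, 2],
            [0, 0, 0, 1],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
        ])
        table, _ = aggregate_raters(ratings)  # recordings x categories counts
        print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))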

  12. Differences between Outdoor and Indoor Sound Levels for Open, Tilted, and Closed Windows

    Science.gov (United States)

    Locher, Barbara; Piquerez, André; Habermacher, Manuel; Ragettli, Martina; Cajochen, Christian; Vienneau, Danielle; Foraster, Maria; Müller, Uwe; Wunderli, Jean Marc

    2018-01-01

    Noise exposure prediction models for health effect studies normally estimate free-field exposure levels outside. However, to assess the noise exposure inside dwellings, an estimate of indoor sound levels is necessary. To date, little field data is available about the difference between indoor and outdoor noise levels and the factors affecting the damping of outside noise. This is a major cause of uncertainty in indoor noise exposure prediction and may lead to exposure misclassification in health assessments. This study aims to determine sound level differences between the indoors and the outdoors for different window positions and how this sound damping is related to building characteristics. For this purpose, measurements were carried out at home in a sample of 102 Swiss residents exposed to road traffic noise. Sound pressure level recordings were performed outdoors and indoors, in the living room and in the bedroom. Three scenarios (open, tilted, and closed windows) were recorded for three minutes each. For each situation, data on additional parameters, such as the orientation towards the source, floor, and room, as well as sound insulation characteristics, were collected. On that basis, linear regression models were established. The median outdoor–indoor sound level differences were 10 dB(A) for open, 16 dB(A) for tilted, and 28 dB(A) for closed windows. For open and tilted windows, the most relevant parameters affecting the outdoor–indoor differences were the position of the window, the type and volume of the room, and the age of the building. For closed windows, the relevant parameters were the sound level outside, the material of the window frame, the existence of window gaskets, and the number of windows. PMID:29346318

  13. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  14. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  15. Heart sounds analysis via esophageal stethoscope system in beagles.

    Science.gov (United States)

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive and easy to handle, and it provides a wealth of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed software and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropics or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart to hear sounds in a non-invasive manner.

  16. The 2011 marine heat wave in Cockburn Sound, southwest Australia

    Directory of Open Access Journals (Sweden)

    T. H. Rose

    2012-07-01

    Over 2000 km of Western Australian coastline experienced a significant marine heat wave in February and March 2011. Seawater temperature anomalies of +2–4 °C were recorded at a number of locations, and satellite-derived SSTs (sea surface temperatures) were the highest on record. Here, we present seawater temperatures from southwestern Australia and describe, in detail, the marine climatology of Cockburn Sound, a large, multiple-use coastal embayment. We compared temperature and dissolved oxygen levels in 2011 with data from routine monitoring conducted from 2002–2010. A significant warming event, 2–4 °C in magnitude, persisted for > 8 weeks, and seawater temperatures at 10 to 20 m depth were significantly higher than those recorded in the previous 9 yr. Dissolved oxygen levels were depressed at most monitoring sites, being ~ 2 mg l−1 lower than usual in early March 2011. Ecological responses to short-term extreme events are poorly understood, but evidence from elsewhere along the Western Australian coastline suggests that the heat wave was associated with high rates of coral bleaching; fish, invertebrate and macroalgae mortalities; and algal blooms. However, there is a paucity of historical information on ecologically sensitive habitats and taxa in Cockburn Sound, so formal examinations of biological responses to the heat wave were not possible. The 2011 heat wave provided insights into conditions that may become more prevalent in Cockburn Sound, and elsewhere, if the intensity and frequency of short-term extreme events increase as predicted.

  17. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it arrives at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts, and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  18. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  19. Illustrations and supporting texts for sound standing waves of air columns in pipes in introductory physics textbooks

    Directory of Open Access Journals (Sweden)

    Liang Zeng

    2014-07-01

    In our pilot studies, we found that many introductory physics textbook illustrations with supporting text for sound standing waves of air columns in open-open, open-closed, and closed-closed pipes inhibit student understanding of sound standing wave phenomena, owing to student misunderstanding of how air molecules move within these pipes. Based on the construct of meaningful learning from cognitive psychology and semiotics, a quasi-experimental study was conducted to investigate the comparative effectiveness of two alternative approaches to student understanding: a traditional textbook illustration approach versus a newly designed air molecule motion illustration approach. Thirty volunteer students from introductory physics classes were randomly assigned to two groups of 15 each. Both groups were administered a presurvey. Then, group A read the air molecule motion illustration handout and group B read a traditional textbook illustration handout; both groups were administered postsurveys. Subsequently, the procedure was reversed: group B read the air molecule motion illustration handout and group A read the traditional textbook illustration handout. This was followed by a second postsurvey along with an exit research questionnaire. The study found that the majority of students experienced meaningful learning and stated that they understood sound standing wave phenomena significantly better using the air molecule motion illustration approach. This finding provides a method for physics education researchers to design illustrations for abstract sound standing wave concepts, for publishers to improve their illustrations with supporting text, and for instructors to facilitate deeper learning in their students on sound standing waves.
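
    As background to what such illustrations encode, the resonant frequencies of the three pipe types reduce to two standard formulas; a small sketch (speed of sound assumed to be 343 m/s):

        def pipe_resonances(length_m, open_open=True, n_modes=4, v=343.0):
            """Resonant frequencies of an air column.
            open-open (and closed-closed) pipes: f_n = n*v/(2L), n = 1, 2, 3, ...
            open-closed pipes: f_n = (2n-1)*v/(4L), odd harmonics only."""
            if open_open:
                return [n * v / (2 * length_m) for n in range(1, n_modes + 1)]
            return [(2 * n - 1) * v / (4 * length_m) for n in range(1, n_modes + 1)]

        print(pipe_resonances(0.5))                   # open-open: 343 Hz fundamental
        print(pipe_resonances(0.5, open_open=False))  # open-closed: 171.5 Hz fundamental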

  20. Evidence of sound production by spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain

    Science.gov (United States)

    Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.

    2018-01-01

    Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds, named growls and snaps, were heard on lake trout spawning reefs but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors: growls when males were quivering and parallel swimming, and snaps when males moved their jaws. Combining our results with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid) provides rare evidence of spawning-related sound production by a salmonid, or any other fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.

  1. Enhanced Memory Consolidation Via Automatic Sound Stimulation During Non-REM Sleep.

    Science.gov (United States)

    Leminen, Miika M; Virkkala, Jussi; Saure, Emma; Paajanen, Teemu; Zee, Phyllis C; Santostasi, Giovanni; Hublin, Christer; Müller, Kiti; Porkka-Heiskanen, Tarja; Huotilainen, Minna; Paunio, Tiina

    2017-03-01

Slow-wave sleep (SWS) slow waves and sleep spindle activity have been shown to be crucial for memory consolidation. Recently, memory consolidation has been causally facilitated in human participants via auditory stimuli phase-locked to SWS slow waves. Here, we aimed to develop a new acoustic stimulus protocol to facilitate learning and to validate it using different memory tasks. Most importantly, the stimulation setup was automated to be applicable for ambulatory home use. Fifteen healthy participants slept 3 nights in the laboratory. Learning was tested with 4 memory tasks (word pairs, serial finger tapping, picture recognition, and face-name association). Additional questionnaires addressed subjective sleep quality and overnight changes in mood. During the stimulus night, auditory stimuli were adjusted and targeted by an unsupervised algorithm to be phase-locked to the negative peak of slow waves in SWS. During the control night no sounds were presented. Results showed that the sound stimulation increased both slow wave (p = .002) and sleep spindle activity. When memory performance was compared between stimulus and control nights, we found a significant effect in the word pair task but not in the other memory tasks. The stimulation did not affect sleep structure or subjective sleep quality. We showed that the memory effect of the SWS-targeted, individually triggered single-sound stimulation is specific to verbal associative memory. Moreover, the ambulatory and automated sound stimulus setup was promising and allows for a broad range of potential follow-up studies in the future. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society].
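
    As an illustration of the kind of slow-wave targeting described above, the sketch below filters an offline EEG trace into the slow-wave band and marks the negative peaks that would trigger a sound. The filter band, amplitude threshold, and function names are illustrative assumptions, not the authors' published algorithm:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def slow_wave_triggers(eeg, fs, band=(0.5, 4.0), thresh_uv=-40.0):
          """Return sample indices of slow-wave negative peaks that would
          trigger a sound in a phase-locked stimulation protocol (sketch)."""
          b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          sw = filtfilt(b, a, eeg)  # isolate the slow-wave band
          triggers = []
          for i in range(1, len(sw) - 1):
              # a local minimum below the amplitude threshold = negative peak
              if sw[i] < thresh_uv and sw[i] <= sw[i - 1] and sw[i] < sw[i + 1]:
                  triggers.append(i)
          return np.array(triggers)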

  2. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    Science.gov (United States)

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive device (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs with 3 minutes of either ambient noise (Days 1, 3, and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli. Some dogs were...

  3. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for...
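
    The tolerance-window evaluation described above can be made concrete with a small scoring routine: a detected boundary counts as a true positive when it falls within ±tol seconds of an unmatched reference annotation. This is an illustrative sketch, not the authors' implementation:

      import numpy as np

      def f1_with_tolerance(ref, det, tol=0.1):
          """F1 score for event detection: a detection is a true positive if
          it lies within +/- tol seconds of an unmatched reference event."""
          ref, det = np.sort(np.asarray(ref)), np.sort(np.asarray(det))
          matched = np.zeros(len(ref), dtype=bool)
          tp = 0
          for d in det:
              idx = np.where(~matched & (np.abs(ref - d) <= tol))[0]
              if idx.size:  # match the closest still-unmatched reference event
                  matched[idx[np.argmin(np.abs(ref[idx] - d))]] = True
                  tp += 1
          precision = tp / len(det) if len(det) else 0.0
          recall = tp / len(ref) if len(ref) else 0.0
          return 2 * precision * recall / (precision + recall) if tp else 0.0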

  4. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    Science.gov (United States)

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  5. From stereoscopic recording to virtual reality headsets: Designing a new way to learn surgery.

    Science.gov (United States)

    Ros, M; Trives, J-V; Lonjon, N

    2017-03-01

To improve surgical practice, there are several different approaches to simulation. Thanks to wearable technologies, recording 3D movies is now easy. The development of virtual reality headsets makes it possible to imagine a different way of watching these videos: using dedicated software to increase interactivity in a 3D immersive experience. The objective was to record 3D movies from the main surgeon's perspective, to watch the files using virtual reality headsets, and to validate the pedagogic interest of this approach. Surgical procedures were recorded using a system combining two side-by-side cameras placed on a helmet. We added two LEDs just below the cameras to enhance luminosity. Two files were obtained in mp4 format and edited using dedicated software to create 3D movies. The files obtained were then played using a virtual reality headset. Surgeons who tried the immersive experience completed a questionnaire to evaluate the interest of this procedure for surgical learning. Twenty surgical procedures were recorded. The movies capture a scene which extends 180° horizontally and 90° vertically. The immersive experience created by the device conveys a genuine feeling of being in the operating room and seeing the procedure first-hand through the eyes of the main surgeon. All surgeons indicated that they believe in the pedagogical interest of this method. We succeeded in recording the main surgeon's point of view in 3D and watching it on a virtual reality headset. This new approach enhances the understanding of surgery; most of the surgeons appreciated its pedagogic value. This method could be an effective learning tool in the future. Copyright © 2016. Published by Elsevier Masson SAS.

  6. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

Sound offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or another medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  7. Automatic adventitious respiratory sound analysis: A systematic review.

    Directory of Open Access Journals (Sweden)

    Renard Xaviero Adhi Pramono

(11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as the underlying pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.

  8. Automatic adventitious respiratory sound analysis: A systematic review.

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

(11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as the underlying pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.

  9. Tech-Assisted Language Learning Tasks in an EFL Setting: Use of Hand phone Recording Feature

    Directory of Open Access Journals (Sweden)

    Alireza Shakarami

    2014-09-01

Full Text Available Technology, with its great leaps forward, has an undeniable impact on every aspect of our lives in the new millennium, supplying new affordances almost daily. Computer technology has been a breakthrough for twenty-first-century education, with CALL, CMC, and virtual learning spaces as instant examples. Among the newly developed gadgets of today are sophisticated smartphones, now far more than the communication tools they were once designed to be. The development of the hand phone into a widespread multitasking gadget has urged researchers to investigate its effect on every aspect of the learning process, including language learning. This study explores the effects of using the cell phone audio-recording feature, by Iranian EFL learners, on the development of their speaking skills. Thirty-five sophomore students were enrolled in a pre-/post-test study. Data on their English speaking experience using the audio-recording features of their hand phones were collected. At the end of the semester, the performance of both groups, treatment and control, was observed, evaluated, and analyzed, and then examined qualitatively in a second phase. The quantitative outcome lent support to integrating hand phones into the language learning curriculum.

  10. High frequency components of tracheal sound are emphasized during prolonged flow limitation

    International Nuclear Information System (INIS)

    Tenhunen, M; Huupponen, E; Saastamoinen, A; Kulkas, A; Himanen, S-L; Rauhala, E

    2009-01-01

A nasal pressure transducer, which is used to study nocturnal airflow, also provides information about the inspiratory flow waveform. A round flow shape is seen during normal breathing. A flattened, non-round shape is found during hypopneas, and it can also appear in prolonged episodes. The significance of this prolonged flow limitation is still not established. The tracheal sound spectrum has been analyzed further in order to obtain additional information about breathing during sleep. Increased sound frequencies over 500 Hz have been connected to obstruction of the upper airway. The aim of the present study was to examine the tracheal sound content of prolonged flow limitation and to find out whether prolonged flow limitation contains abundant high-frequency activity. Sleep recordings of 36 consecutive patients were examined. Tracheal sound spectral analysis was performed on 10 min episodes of prolonged flow limitation, normal breathing, and periodic apnea-hypopnea breathing. The highest total spectral amplitude, indicating the loudest sounds, occurred during flow-limited breathing, which also presented the loudest sounds in all frequency bands above 100 Hz. In addition, the tracheal sound signal during flow-limited breathing contained proportionally more high-frequency activity than normal breathing and even periodic apnea-hypopnea breathing.
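
    The spectral comparison described above, how much tracheal-sound energy lies above 500 Hz, reduces to a band-limited power ratio. A minimal sketch, assuming a single-channel recording (function name and parameters are illustrative):

      import numpy as np
      from scipy.signal import welch

      def high_frequency_fraction(x, fs, split_hz=500.0):
          """Fraction of tracheal-sound spectral power above split_hz,
          estimated with Welch's method."""
          f, pxx = welch(x, fs=fs, nperseg=2048)
          return pxx[f >= split_hz].sum() / pxx.sum()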

  11. Repetition Suppression in the Left Inferior Frontal Gyrus Predicts Tone Learning Performance.

    Science.gov (United States)

    Asaridou, Salomi S; Takashima, Atsuko; Dediu, Dan; Hagoort, Peter; McQueen, James M

    2016-06-01

    Do individuals differ in how efficiently they process non-native sounds? To what extent do these differences relate to individual variability in sound-learning aptitude? We addressed these questions by assessing the sound-learning abilities of Dutch native speakers as they were trained on non-native tone contrasts. We used fMRI repetition suppression to the non-native tones to measure participants' neuronal processing efficiency before and after training. Although all participants improved in tone identification with training, there was large individual variability in learning performance. A repetition suppression effect to tone was found in the bilateral inferior frontal gyri (IFGs) before training. No whole-brain effect was found after training; a region-of-interest analysis, however, showed that, after training, repetition suppression to tone in the left IFG correlated positively with learning. That is, individuals who were better in learning the non-native tones showed larger repetition suppression in this area. Crucially, this was true even before training. These findings add to existing evidence that the left IFG plays an important role in sound learning and indicate that individual differences in learning aptitude stem from differences in the neuronal efficiency with which non-native sounds are processed. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings, together with microphone positions and source directivity cues, allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together... Orientation estimates, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor...

  13. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, and the presence or absence of an abnormality is judged based on the magnitude of the synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated nearly synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. Abnormal sounds of a rotating body, on the other hand, are often driven by forces that accompany the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic sounds currently generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, abnormal-sound detection sensitivity is improved. Further, since the device discriminates the occurrence of abnormal sound from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
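
    One way to read the rotation-synchronized sampling described above is as measuring spectral magnitude only at harmonics of the pump's rotation frequency. The sketch below is an illustrative interpretation of that idea, not the device's actual implementation:

      import numpy as np

      def rotation_synchronized_level(x, fs, rot_hz, n_harmonics=10, bw_hz=0.5):
          """Sum the spectral magnitude in narrow bands centred on the first
          n_harmonics of the rotation frequency; exceeding a baseline level
          would indicate an abnormal rotation-synchronized sound."""
          spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          level = 0.0
          for k in range(1, n_harmonics + 1):
              band = np.abs(freqs - k * rot_hz) <= bw_hz
              level += spec[band].sum()
          return level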

  14. Biography, identity, improvisation, sound: intersections of personal and social identity through improvisation

    NARCIS (Netherlands)

    Smilde, Rineke

    2016-01-01

This essay addresses the relationship of improvisation and identity. Biographical research that was conducted by the author into professional musicians' lifelong learning showed the huge importance of improvisation for personal expression. Musically, the concept of sound appeared to serve as a...

  15. Screening of snoring with an MP3 recorder.

    Science.gov (United States)

    Kreivi, Hanna-Riikka; Salmi, Tapani; Maasilta, Paula; Bachour, Adel

    2013-03-01

Snoring patients seeking medical assistance represent a wide range of clinical and sleep study findings, from nonsleepy nonapneic snoring to severe obstructive sleep apnea syndrome. The prevalence of snoring is high and it significantly impacts quality of life. Its objective diagnosis usually requires a sleep study. We developed a system to analyze snoring sounds with a Moving Picture Experts Group Layer-3 Audio (MP3) recorder device and present its value in the screening of snoring. We recorded snoring sounds during in-lab polysomnography (PSG) in 200 consecutive patients referred for a suspicion of obstructive sleep apnea. Snoring was recorded during the PSG with two microphones: one attached to the throat and the other to the ceiling; an MP3 device was attached to the patient's collar. Snoring was confirmed when the MP3 acoustic signal exceeded twice the median value of the acoustic signal for the entire recording. Results of the MP3 snoring recording were compared to the snoring recordings from the PSG. MP3 recording proved technically successful for 87% of the patients. The Pearson correlation between PSG snoring and MP3 snoring was highly significant at 0.77. The MP3 recording device underestimated the snoring time by a mean ± SD of 32 ± 55 min. The recording of snoring with an MP3 device provides reliable information about the patient's snoring.
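
    The snore criterion used in this study, an acoustic signal exceeding twice the median of the entire recording, fits in a few lines. A sketch assuming a precomputed amplitude envelope env (the minimum-duration filter is an added illustrative assumption):

      import numpy as np

      def snore_mask(env, fs, min_dur_s=0.3):
          """Flag samples whose envelope exceeds twice the recording median,
          keeping only runs long enough to be plausible snores."""
          above = env > 2.0 * np.median(env)
          mask = np.zeros_like(above)
          run_start = None
          for i, flag in enumerate(above):
              if flag and run_start is None:
                  run_start = i
              elif not flag and run_start is not None:
                  if (i - run_start) / fs >= min_dur_s:
                      mask[run_start:i] = True
                  run_start = None
          if run_start is not None and (len(above) - run_start) / fs >= min_dur_s:
              mask[run_start:] = True
          return mask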

  16. 3D-Audio Matting, Postediting, and Rerendering from Field Recordings

    Directory of Open Access Journals (Sweden)

    Guillaume Lemaitre

    2007-01-01

    Full Text Available We present a novel approach to real-time spatial rendering of realistic auditory environments and sound sources recorded live, in the field. Using a set of standard microphones distributed throughout a real-world environment, we record the sound field simultaneously from several locations. After spatial calibration, we segment from this set of recordings a number of auditory components, together with their location. We compare existing time delay of arrival estimation techniques between pairs of widely spaced microphones and introduce a novel efficient hierarchical localization algorithm. Using the high-level representation thus obtained, we can edit and rerender the acquired auditory scene over a variety of listening setups. In particular, we can move or alter the different sound sources and arbitrarily choose the listening position. We can also composite elements of different scenes together in a spatially consistent way. Our approach provides efficient rendering of complex soundscapes which would be challenging to model using discrete point sources and traditional virtual acoustics techniques. We demonstrate a wide range of possible applications for games, virtual and augmented reality, and audio visual post production.
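
    The time-delay-of-arrival estimation this work builds on is classically computed from the peak of the cross-correlation between two microphone signals. A minimal sketch of that baseline (plain FFT-based cross-correlation, not the paper's hierarchical localization algorithm):

      import numpy as np

      def tdoa(x1, x2, fs):
          """Estimate how much x2 is delayed relative to x1 (in seconds)
          from the peak of their FFT-based cross-correlation."""
          assert len(x1) == len(x2)
          n = 2 * len(x1) - 1
          cc = np.fft.irfft(np.conj(np.fft.rfft(x1, n)) * np.fft.rfft(x2, n), n)
          # rearrange circular lags into the order -(N-1) .. (N-1)
          cc = np.concatenate((cc[-(len(x1) - 1):], cc[:len(x1)]))
          return (np.argmax(np.abs(cc)) - (len(x1) - 1)) / fs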

  17. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928

  19. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

Full Text Available Although its role is frequently stressed in acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies.

  20. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

1. Sound production and pectoral spine locking in a Neotropical catfish (Iheringichthys labrosus, Pimelodidae)

    Directory of Open Access Journals (Sweden)

    Javier S. Tellechea

Full Text Available Catfishes may have two sonic organs: pectoral spines for stridulation and swimbladder drumming muscles. The aim of this study was to characterize the sound production of the catfish Iheringichthys labrosus. Male and female I. labrosus emit two different types of sounds: stridulatory sounds (655.8 ± 230 Hz), consisting of a train of pulses, and drumming sounds (220 ± 46 Hz), which are composed of single-pulse harmonic signals. Stridulatory sounds are emitted during abduction of the pectoral spine. At the base of the spine there is a dorsal process that bears a series of ridges on its latero-ventral surface; by pressing the ridges against the groove (an unspecialized rough surface) during a fin sweep, the animal produces a series of short pulses. The drumming sound is produced by an extrinsic sonic muscle that originates on a flat tendon of the transverse process of the fourth vertebra and inserts on the rostral and ventral surface of the swimbladder. Sounds from both mechanisms are emitted in distress situations. Distress was induced by manipulating fish in a laboratory tank while sounds were recorded. Our results indicate that the catfish initially emits a stridulatory sound, which is followed by a drumming sound. Simultaneous production of stridulatory and drumming sounds was also observed. The drumming sounds were lower in dominant frequency than the stridulatory sounds and exhibited a small degree of dominant-frequency modulation. Another behaviour observed in this catfish was pectoral spine locking, a reaction always observed before the distress sound production. As other authors have outlined, our results suggest that in I. labrosus stridulatory and drumming sounds may function primarily as distress calls.

  2. Onboard Acoustic Recording from Diving Elephant Seals

    National Research Council Canada - National Science Library

    Fletcher, Stacia

    1996-01-01

    The aim of this project was to record sounds impinging on free-ranging northern elephant seals, Mirounga angustirostris, a first step in determining the importance of LFS to these animals as they dive...

  3. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

Sound search is provided by the major search engines; however, indexing is text-based, not sound-based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  4. Gateway of Sound: Reassessing the Role of Audio Mastering in the Art of Record Production

    Directory of Open Access Journals (Sweden)

    Carlo Nardi

    2014-06-01

Full Text Available Audio mastering, notwithstanding an apparent lack of scholarly attention, is a crucial gateway between production and consumption and, as such, is worth further scrutiny, especially in music genres like house or techno, which place great emphasis on sound production qualities. In this article, drawing on personal interviews with mastering engineers and field research in mastering studios in Italy and Germany, I investigate the practice of mastering engineering, paying close attention to the negotiation of techniques and sound aesthetics in relation to changes in industry formats and, in particular, to the growing shift among DJs from vinyl to compressed digital formats. I then discuss the specificity of audio mastering in relation to EDM, insofar as DJs and controllerists conceive of the master not as a finished product destined for listening, but as raw material that can be reworked in performance.

  5. THE INTONATION AND SOUND CHARACTERISTICS OF ADVERTISING PRONUNCIATION STYLE

    Directory of Open Access Journals (Sweden)

    Chernyavskaya Elena Sergeevna

    2014-06-01

Full Text Available The article describes the intonation and sound characteristics of the advertising pronunciation style. On the basis of acoustic analysis of transcripts of radio advertising recordings broadcast at different radio stations, and by processing a representative set of phrases with special computer programs, the author determines the parameters of suprasegmental means. The article argues that the stylistic parameters of the advertising pronunciation style are oriented toward modern orthoepy, and that the distinctive sound of radio advertising is determined by two tendencies: the reduction of stressed-vowel duration in terminal and non-terminal words, and the increase of pre-tonic and post-tonic vowel duration in non-terminal words in a phrase. The article also shows that the peculiar rhythmic structure of terminal and non-terminal words in radio advertising is formed by levelling stressed and unstressed sounds in length. The specificity of the intonational structure of an advertising text consists in the following: the matching of syntactic and syntagmatic division, which marks out the blocks of semantic models forming the text of radio advertising; the allocation of keywords to separate syntagmas; the design of the informative parts of the advertising text by means of symmetric length correlation of minimal speech segments; and the combination of inter-style prosodic elements within the sounding text. The analysis thus leads to the conclusion that the texts of sounding advertising are produced in a special pronunciation style marked by sound duration.

  6. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

This article discusses the change in premise that digitally produced sound brings about, and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down.

  7. Mapping Phonetic Features for Voice-Driven Sound Synthesis

    Science.gov (United States)

    Janer, Jordi; Maestre, Esteban

    In applications where the human voice controls the synthesis of musical instruments sounds, phonetics convey musical information that might be related to the sound of the imitated musical instrument. Our initial hypothesis is that phonetics are user- and instrument-dependent, but they remain constant for a single subject and instrument. We propose a user-adapted system, where mappings from voice features to synthesis parameters depend on how subjects sing musical articulations, i.e. note to note transitions. The system consists of two components. First, a voice signal segmentation module that automatically determines note-to-note transitions. Second, a classifier that determines the type of musical articulation for each transition based on a set of phonetic features. For validating our hypothesis, we run an experiment where subjects imitated real instrument recordings with their voice. Performance recordings consisted of short phrases of saxophone and violin performed in three grades of musical articulation labeled as: staccato, normal, legato. The results of a supervised training classifier (user-dependent) are compared to a classifier based on heuristic rules (user-independent). Finally, from the previous results we show how to control the articulation in a sample-concatenation synthesizer by selecting the most appropriate samples.

  8. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  9. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    Science.gov (United States)

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
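
    The paper's method adds kernelization and sequence information to the self-organizing map; the sketch below shows only the plain SOM baseline it extends, mapping feature vectors of sound events onto a 2-D grid (all parameters and names illustrative):

      import numpy as np

      def train_som(data, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
          """Plain self-organizing map: the baseline that the kernel,
          sequence-based variant in the paper extends."""
          rng = np.random.default_rng(seed)
          w = rng.normal(size=(rows, cols, data.shape[1]))
          grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                      indexing="ij"), axis=-1)
          n_steps = epochs * len(data)
          t = 0
          for _ in range(epochs):
              for x in rng.permutation(data):
                  lr = lr0 * (1 - t / n_steps)          # decaying learning rate
                  sigma = sigma0 * (1 - t / n_steps) + 1e-3
                  # best-matching unit = closest weight vector on the grid
                  bmu = np.unravel_index(
                      np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
                  # Gaussian neighbourhood pulls nearby units toward x
                  d2 = ((grid - np.array(bmu)) ** 2).sum(-1)
                  h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
                  w += lr * h * (x - w)
                  t += 1
          return w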

  10. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep, which suggests a potential strategy for academic learning. Such effects have only been demonstrated with spatial learning, and cue presentation was isolated to slow-wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  11. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situatedness of sound. Existing discourses on "spatial sound" privilege...

  12. The Sounds of Picturebooks for English Language Learning

    Directory of Open Access Journals (Sweden)

    M. Teresa Fleta Guillén

    2017-05-01

    Full Text Available Picturebooks have long been recognised to aid language development in both first and second language acquisition. This paper investigates the relevance of the acoustic elements of picturebooks to raise phonological awareness and to fine-tune listening. In order to enhance the learners’ aural and oral skills for English language development, the paper proposes that listening to stories from picturebooks plays a most important role for raising awareness of the sound system of English in child second-language learners. To provide practical advice for teachers of young learners, this article describes the ways that picturebooks promote listening and speaking and develops criteria to select picturebooks for English instruction focusing on the acoustic elements of language.

  13. Original sound compositions reduce anxiety in emergency department patients: a randomised controlled trial.

    Science.gov (United States)

    Weiland, Tracey J; Jelinek, George A; Macarow, Keely E; Samartzis, Philip; Brown, David M; Grierson, Elizabeth M; Winter, Craig

    2011-12-19

To determine whether emergency department (ED) patients' self-rated levels of anxiety are affected by exposure to purpose-designed music or sound compositions with and without the audio frequencies of embedded binaural beat. Randomised controlled trial in an ED between 1 February 2010 and 14 April 2010 among a convenience sample of adult patients who were rated as category 3 on the Australasian Triage Scale. All interventions involved listening to soundtracks of 20 minutes' duration that were purpose-designed by composers and sound-recording artists. Participants were allocated at random to one of five groups: headphones and iPod only, no soundtrack (control group); reconstructed ambient noise simulating an ED but free of clear verbalisations; electroacoustic musical composition; composed non-musical soundtracks derived from audio field recordings obtained from natural and constructed settings; sound composition of audio field recordings with embedded binaural beat. All soundtracks were presented on an iPod through headphones. Patients and researchers were blinded to allocation until interventions were administered. State-trait anxiety was self-assessed before the intervention and state anxiety was self-assessed again 20 minutes after the provision of the soundtrack. The main outcome measure was the Spielberger State-Trait Anxiety Inventory. Of 291 patients assessed for eligibility, 170 patients completed the pre-intervention anxiety self-assessment and 169 completed the post-intervention assessment. Significant decreases in state anxiety occurred in the groups receiving purpose-designed soundtracks, including the composition with embedded binaural beats (43; 37), when compared with those allocated to receive simulated ED ambient noise (40; 41) or headphones only (44; 44). In moderately anxious ED patients, state anxiety was reduced by 10%-15% following exposure to purpose-designed sound interventions. Australian New Zealand Clinical Trials Registry ACTRN 12608000444381.
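
    Binaural beats, the audio feature embedded in one arm of the trial, arise when each ear receives a pure tone at a slightly different frequency, producing a perceived beat at the difference frequency. A sketch generating such a stereo signal (carrier and beat frequencies are illustrative choices, not the trial's specification):

      import numpy as np

      def binaural_beat(carrier_hz=200.0, beat_hz=8.0, dur_s=10.0, fs=44100):
          """Stereo signal whose left/right tones differ by beat_hz, producing
          a perceived binaural beat at that frequency."""
          t = np.arange(int(dur_s * fs)) / fs
          left = np.sin(2 * np.pi * carrier_hz * t)
          right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
          return np.stack([left, right], axis=1)  # shape (n_samples, 2)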

  14. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  15. Brain dynamics that correlate with effects of learning on auditory distance perception

    Directory of Open Access Journals (Sweden)

    Matthew G. Wisniewski

    2014-12-01

Full Text Available Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2 m) and far (30 m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.

  16. Cultural Conceptualisations in Learning English as an L2: Examples from Persian-Speaking Learners

    Science.gov (United States)

    Sharifian, Farzad

    2013-01-01

    Traditionally, many studies of second language acquisition (SLA) were based on the assumption that learning a new language mainly involves learning a set of grammatical rules, lexical items, and certain new sounds and sound combinations. However, for many second language learners, learning a second language may involve contact and interactions…

  17. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Balvantray Bhavsar, Mit; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal and the initiation of commands that activate thoracic pattern generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum and in the central complex, that has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with high success rate. Latencies and numbers of phrases produced by electrical stimulation were different between these brain regions. Our results indicate three brain regions (likely neuropils) where auditory activity can be detected with two of these regions being potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Persuasive Mobile Device Sound Sensing in a Classroom Setting

    Directory of Open Access Journals (Sweden)

    Arttu Perttula

    2013-01-01

    Full Text Available This paper presents an idea on how to utilize mobile phones to support learning in the classroom. The paper also tries to initiate discussion on whether we can create new kinds of learning applications using mobile devices and whether this could be the way we should proceed in developing 21st century learning applications. In this study, a mobile phone is programmed to function as a collective sound sensor. To achieve an appropriate learning atmosphere, the designed system attempts to maintain the noise level at a comfortable tolerance level in the classroom. The main aim of the mobile application is to change student behaviour through persuasive visualizations. The prototype application was piloted during spring 2012 with a total of 72 students and two teachers. The results, based on observations and interviews, are promising and several subjects for future work arose during the pilot study.
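
    The core of such a collective sound sensor is a frame-by-frame level meter compared against a tolerance threshold. A minimal sketch of that loop (frame length and threshold are illustrative assumptions):

      import numpy as np

      def frame_levels_db(x, fs, frame_s=0.5, ref=1.0):
          """RMS level of each frame in dB relative to ref; a persuasive
          display would colour the classroom view by these values."""
          n = int(frame_s * fs)
          frames = x[: len(x) // n * n].reshape(-1, n)
          rms = np.sqrt((frames ** 2).mean(axis=1))
          return 20 * np.log10(np.maximum(rms, 1e-12) / ref)

      def too_loud(levels_db, tolerance_db=-20.0):
          """Boolean mask of frames exceeding the comfort threshold."""
          return levels_db > tolerance_db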

  19. A study on the sound quality evaluation model of mechanical air-cleaners

    DEFF Research Database (Denmark)

    Ih, Jeong-Guon; Jang, Su-Won; Jeong, Cheol-Ho

    2009-01-01

In operating an air-cleaner for a long time, people in a quiet enclosed space expect a low sound level at low operational settings for routine cleaning of the air. At high operational levels, however, a powerful yet non-annoying sound is desired, which is connected to a feeling of immediate cleaning of pollutants. In this context, it is important to evaluate and design air-cleaner noise to satisfy these contradictory expectations from customers. In this study, a model for evaluating the sound quality of mechanical air-cleaners was developed based on objective and subjective analyses. Sound signals from various air-cleaners were recorded and edited by increasing or decreasing the loudness in three wide specific-loudness bands: 20-400 Hz (0-3.8 barks), 400-1250 Hz (3.8-10 barks), and 1.25-12.5 kHz (10-22.8 barks). Subjective tests using the edited...
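
    Editing loudness in the three bands used in the study amounts to splitting the signal near 400 Hz and 1250 Hz and re-mixing with per-band gains. A sketch with Butterworth filters (band edges from the abstract; filter order and crossover handling are illustrative):

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def three_band_edit(x, fs, gains_db=(0.0, 0.0, 0.0)):
          """Split at ~400 Hz and ~1250 Hz (the bands used in the study)
          and re-mix with per-band gains given in dB."""
          nyq = fs / 2
          low = sosfiltfilt(butter(4, 400 / nyq, "low", output="sos"), x)
          mid = sosfiltfilt(butter(4, [400 / nyq, 1250 / nyq], "band", output="sos"), x)
          high = sosfiltfilt(butter(4, 1250 / nyq, "high", output="sos"), x)
          g = [10 ** (gdb / 20) for gdb in gains_db]
          return g[0] * low + g[1] * mid + g[2] * high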

  20. Spontaneous brain activity predicts learning ability of foreign sounds.

    Science.gov (United States)

    Ventura-Campos, Noelia; Sanjuán, Ana; González, Julio; Palomar-García, María-Ángeles; Rodríguez-Pujadas, Aina; Sebastián-Gallés, Núria; Deco, Gustavo; Ávila, César

    2013-05-29

    Can learning capacity of the human brain be predicted from initial spontaneous functional connectivity (FC) between brain areas involved in a task? We combined task-related functional magnetic resonance imaging (fMRI) and resting-state fMRI (rs-fMRI) before and after training with a Hindi dental-retroflex nonnative contrast. Previous fMRI results were replicated, demonstrating that this learning recruited the left insula/frontal operculum and the left superior parietal lobe, among other areas of the brain. Crucially, resting-state FC (rs-FC) between these two areas at pretraining predicted individual differences in learning outcomes after distributed (Experiment 1) and intensive training (Experiment 2). Furthermore, this rs-FC was reduced at posttraining, a change that may also account for learning. Finally, resting-state network analyses showed that the mechanism underlying this reduction of rs-FC was mainly a transfer in intrinsic activity of the left frontal operculum/anterior insula from the left frontoparietal network to the salience network. Thus, rs-FC may contribute to predict learning ability and to understand how learning modifies the functioning of the brain. The discovery of this correspondence between initial spontaneous brain activity in task-related areas and posttraining performance opens new avenues to find predictors of learning capacities in the brain using task-related fMRI and rs-fMRI combined.
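
    The prediction itself is a two-step correlation: rs-FC is the correlation between two regions' resting time series, and that per-subject value is then correlated with learning outcome across subjects. A minimal sketch (array names illustrative):

      import numpy as np

      def resting_fc(ts_a, ts_b):
          """Resting-state functional connectivity between two ROI time
          series, as the Pearson correlation coefficient."""
          return np.corrcoef(ts_a, ts_b)[0, 1]

      def fc_predicts_learning(roi_a, roi_b, learning_scores):
          """Correlate per-subject pretraining FC with learning outcome.
          roi_a, roi_b: lists of time series (one per subject)."""
          fc = np.array([resting_fc(a, b) for a, b in zip(roi_a, roi_b)])
          return np.corrcoef(fc, np.asarray(learning_scores))[0, 1]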

  1. Learning while Babbling: Prelinguistic Object-Directed Vocalizations Indicate a Readiness to Learn

    Science.gov (United States)

    Goldstein, Michael H.; Schwade, Jennifer; Briesch, Jacquelyn; Syal, Supriya

    2010-01-01

    Two studies illustrate the functional significance of a new category of prelinguistic vocalizing--object-directed vocalizations (ODVs)--and show that these sounds are connected to learning about words and objects. Experiment 1 tested 12-month-old infants' perceptual learning of objects that elicited ODVs. Fourteen infants' vocalizations were…

  2. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped, and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  3. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

The properties of sound can be modified by sound absorption, refraction, and interference from multipath propagation caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms influencing communication sounds in airborne acoustics, with bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  4. Effects of temperature on sound production and auditory abilities in the Striped Raphael catfish Platydoras armatulus (Family Doradidae).

    Directory of Open Access Journals (Sweden)

    Sandra Papes

    Full Text Available Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C, and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, the hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3-5 ms). The hearing sensitivity was higher at the higher temperature and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double-clicks did not change. These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in

  5. International perception of lung sounds: a comparison of classification across some European borders

    OpenAIRE

    Aviles Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder Andreas; Francis, Nick; Cals, Jochem W L; Andreeva, Elena A; Marques, Alda; Piirila, Paivi; Pasterkamp, Hans; Melbye, Hasse

    2017-01-01

    Source at http://dx.doi.org/10.1136/bmjresp-2017-000250 Introduction: Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research aimed at generating evidence-based knowledge about the role that chest auscultation has in a modern clinical setting. The recording and visual display of lung sounds is a method that is both repeatab...

  6. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity......Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels...

  7. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, M. C.; Wang, L. M.; Rindel, Jens Holger

    2004-01-01

    time. However, for the three other parameters evaluated (sound-pressure level, clarity index, and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity when using computer......Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels...

  8. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  9. Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds

    Directory of Open Access Journals (Sweden)

    Chin-Hsing Chen

    2015-06-01

    Full Text Available A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician's subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, the K-means algorithm was then used for feature clustering to reduce the amount of data for computation, and finally the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames is > 30% of the whole test signal, the system can automatically warn the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
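
    The pipeline this record describes (MFCC feature extraction, K-means reduction of the training frames, K-nearest-neighbor classification, and a 30% abnormal-frame warning threshold) can be sketched briefly. The sketch below is only an illustration of that pipeline, not the authors' implementation; the file names, class labels, and parameter choices (8 clusters, 3 neighbors) are assumptions, and it relies on the librosa and scikit-learn libraries.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def mfcc_frames(path, sr=8000, n_mfcc=13):
        # Per-frame MFCC vectors, shape (frames, coefficients).
        y, sr = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    # Hypothetical labelled training recordings; K-means reduces each class
    # to a small codebook of centroid vectors, cutting later computation.
    train = {"normal": ["normal_01.wav"], "crackle": ["crackle_01.wav"]}
    X, y = [], []
    for label, files in train.items():
        frames = np.vstack([mfcc_frames(f) for f in files])
        centroids = KMeans(n_clusters=8, n_init=10).fit(frames).cluster_centers_
        X.append(centroids)
        y += [label] * len(centroids)
    knn = KNeighborsClassifier(n_neighbors=3).fit(np.vstack(X), y)

    # Frame-level classification of a test recording; warn above 30%.
    pred = knn.predict(mfcc_frames("patient.wav"))
    if np.mean(pred != "normal") > 0.30:
        print("More than 30% abnormal frames - consider visiting a physician.")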

  10. Fast phonetic learning occurs already in 2-to-3-month old infants. An ERP study

    NARCIS (Netherlands)

    Wanrooij, K.; Boersma, P.; van Zuijen, T.L.

    2014-01-01

    An important mechanism for learning speech sounds in the first year of life is ‘distributional learning’, i.e., learning by simply listening to the frequency distributions of the speech sounds in the environment. In the lab, fast distributional learning has been reported for infants in the second

  11. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four...... different microphone arrays. In each configuration, 20 different audio segments were played and recorded ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  12. Clinical Relation Extraction Toward Drug Safety Surveillance Using Electronic Health Record Narratives: Classical Learning Versus Deep Learning.

    Science.gov (United States)

    Munkhdalai, Tsendsuren; Liu, Feifan; Yu, Hong

    2018-04-25

    Medication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data. To unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations. We have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types. Our results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%. It shows that
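
    The macro-averaged scores reported in this record weight each relation type equally, regardless of how often it occurs in the data. A minimal sketch of that computation with scikit-learn (relation labels are invented for illustration; this is not the study's code):

    from sklearn.metrics import precision_recall_fscore_support

    # Invented gold and predicted relation labels for illustration.
    y_true = ["medication-ADE", "medication-dosage", "severity-ADE", "medication-ADE"]
    y_pred = ["medication-ADE", "severity-ADE", "severity-ADE", "medication-ADE"]

    # average="macro": metrics are computed per relation type, then averaged,
    # so rare relation types count as much as frequent ones.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    print(f"macro P={p:.2f}  R={r:.2f}  F1={f1:.2f}")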

  13. Atypical pattern of discriminating sound features in adults with Asperger syndrome as reflected by the mismatch negativity.

    Science.gov (United States)

    Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R

    2007-04-01

    Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits in social interaction and abnormal perception, such as hypo- or hypersensitivity in reacting to sounds and in discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.

  14. Knowledge about Sounds – Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields and Layers in House Mice

    Directory of Open Access Journals (Sweden)

    Diana B. Geissler

    2016-03-01

    Full Text Available Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about, and are differently motivated to respond to, acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects in AI, AAF, and UF the integration of sound properties with animal state-dependent factors, in the higher-order field AII the news value of a given sound in the behavioral context, and in the higher-order field DP the level of maternal motivation and, by left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific, preparing different output of AC fields in the process of perception, recognition and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest the differentiation between brains hormonally primed to know (mothers) and brains which acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition.

  15. Sound Exposure of Healthcare Professionals Working with a University Marching Band.

    Science.gov (United States)

    Russell, Jeffrey A; Yamaguchi, Moegi

    2018-01-01

    Music-induced hearing disorders are known to result from exposure to excessive levels of music of different genres. Marching band music, with its heavy emphasis on brass and percussion, is one type that is a likely contributor to music-induced hearing disorders, although specific data on sound pressure levels of marching bands have not been widely studied. Furthermore, if marching band music does lead to music-induced hearing disorders, the musicians may not be the only individuals at risk. Support personnel such as directors, equipment managers, and performing arts healthcare providers may also be exposed to potentially damaging sound pressures. Thus, we sought to explore to what degree healthcare providers receive sound dosages above recommended limits during their work with a marching band. The purpose of this study was to determine the sound exposure of healthcare professionals (specifically, athletic trainers [ATs]) who provide on-site care to a large, well-known university marching band. We hypothesized that sound pressure levels to which these individuals were exposed would exceed the National Institute for Occupational Safety and Health (NIOSH) daily percentage allowance. Descriptive observational study. Eight ATs working with a well-known American university marching band volunteered to wear noise dosimeters. During the marching band season, ATs wore an Etymotic ER-200D dosimeter whenever working with the band at outdoor rehearsals, indoor field house rehearsals, and outdoor performances. The dosimeters recorded dose percent exposure, equivalent continuous sound levels in A-weighted decibels, and duration of exposure. For comparison, a dosimeter also was worn by an AT working in the university's performing arts medicine clinic. Participants did not alter their typical duties during any data collection sessions. Sound data were collected with the dosimeters set at the NIOSH standards of 85 dBA threshold and 3 dBA exchange rate; the NIOSH 100% daily dose is
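
    For reference, the NIOSH criteria cited here (85 dBA criterion level, 3 dB exchange rate, 8-hour reference duration) define the daily dose by the standard formula dose% = 100 * sum(C_i / T_i), where C_i is the time spent at level L_i and T_i = 480 / 2^((L_i - 85) / 3) minutes is the permissible duration at that level. A worked sketch of that formula (not code from the study):

    def allowed_minutes(level_dba, criterion=85.0, exchange=3.0, ref_min=480.0):
        # Permissible duration at a constant level: halves every 3 dB above 85 dBA.
        return ref_min / 2.0 ** ((level_dba - criterion) / exchange)

    def niosh_dose(exposures):
        # exposures: iterable of (level in dBA, duration in minutes).
        return 100.0 * sum(minutes / allowed_minutes(level)
                           for level, minutes in exposures)

    # Example: 2 h of rehearsal at 94 dBA plus 3 h at 85 dBA -> about 238%.
    print(f"{niosh_dose([(94, 120), (85, 180)]):.0f}% of the daily dose")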

  16. Sound production in Japanese medaka (Oryzias latipes) and its alteration by exposure to aldicarb and copper sulfate.

    Science.gov (United States)

    Kang, Ik Joon; Qiu, Xuchun; Moroishi, Junya; Oshima, Yuji

    2017-08-01

    This study is the first to report sound production in Japanese medaka (Oryzias latipes). Sound production was affected by exposure to a carbamate insecticide (aldicarb) and a heavy-metal compound (copper sulfate). Medaka were exposed at four concentrations (aldicarb: 0, 0.25, 0.5, and 1 mg L⁻¹; copper sulfate: 0, 0.5, 1, and 2 mg L⁻¹), and sound characteristics were monitored for 5 h after exposure. We observed constant average interpulse intervals (approximately 0.2 s) in all test groups before exposure, and in the control groups throughout the experiment. The average interpulse interval became significantly longer during the recording periods after 50 min of exposure to aldicarb, and reached a length of more than 0.3 s during the recording periods after 120 min of exposure. Most medaka stopped producing sound after 50 min of exposure to copper sulfate at 1 and 2 mg L⁻¹, resulting in a significantly reduced number of sound pulses and pulse groups. Relatively shortened interpulse intervals were occasionally observed in medaka exposed to 0.5 mg L⁻¹ copper sulfate. These alterations in sound characteristics due to toxicant exposure suggest that acoustic communication of medaka, which may be important for their reproduction and survival, might be impaired. Our results suggest that monitoring acoustic changes in medaka has potential for detecting sudden water pollution events, such as intentional poisoning or accidental leakage of industrial waste. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Characterization of sound emitted by wind machines used for frost control

    Energy Technology Data Exchange (ETDEWEB)

    Gambino, V.; Gambino, T. [Aercoustics Engineering Ltd., Toronto, ON (Canada); Fraser, H.W. [Ontario Ministry of Agriculture, Food and Rural Affairs, Vineland, ON (Canada)

    2007-07-01

    Wind machines are used in Niagara-on-the-Lake to protect cold-sensitive crops against cold injury during winter's extreme cold temperatures, spring's late frosts and autumn's early frosts. The number of wind machines in Ontario roughly doubled annually from only a few in the late 1990s to more than 425 in 2006. They are not used for generating power. Noise complaints have multiplied as the number of wind machines has increased. The objective of this study was to characterize the sound produced by wind machines; learn why residents are annoyed by wind machine noise; and suggest ways to possibly reduce sound emissions. One part of the study explored acoustic emission characteristics, the sonic differences of units made by different manufacturers, sound propagation properties under typical-use atmospheric conditions, and low frequency noise impact potential. Tests were conducted with a calibrated Larson Davis 2900B portable spectrum analyzer. Sound was measured with a microphone whose frequency response covered the range 4 Hz to 20 kHz. The study examined and found several unique acoustic properties that are characteristic of wind machines. It was determined that noise from wind machines is due to both aerodynamic and mechanical effects, but aerodynamic sounds were found to be the most significant. It was concluded that full range or broadband sounds manifest themselves as noise components that extend throughout the audible frequency range from the blade-pass frequency to upwards of 1000 Hz. The sound spectrum of a wind machine is full of natural tones and impulses that give it a readily identifiable acoustic character. Atmospheric conditions including temperature, lapse rate, relative humidity, mild winds, gradients and atmospheric turbulence all play a significant role in the long range outdoor propagation of sound from wind machines. 6 refs., 6 figs.

  18. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between the auditory, visual, and semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative of memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinder memory performance (a 'visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that memory performance for product sounds is task-dependent.

  19. Privacy Act System of Records: EPA Telecommunications Detail Records, EPA-32

    Science.gov (United States)

    Learn more about the EPA Telecommunications Detail Records System, including who is covered in the system, the purpose of data collection, routine uses for the system's records, and other security procedures.

  20. 33 CFR 167.1702 - In Prince William Sound: Prince William Sound Traffic Separation Scheme.

    Science.gov (United States)

    2010-07-01

    33 CFR 167.1702, Navigation and Navigable Waters (Coast Guard): In Prince William Sound: Prince William Sound Traffic Separation Scheme. The Prince William Sound...

  1. Phonemic versus allophonic status modulates early brain responses to language sounds: an MEG/ERF study

    DEFF Research Database (Denmark)

    Nielsen, Andreas Højlund; Gebauer, Line; Mcgregor, William

    allophonic sound contrasts. So far this has only been attested between languages. In the present study we wished to investigate this effect within the same language: Does the same sound contrast that is phonemic in one environment, but allophonic in another, elicit different MMNm responses in native...... ‘that’). This allowed us to manipulate the phonemic/allophonic status of exactly the same sound contrast (/t/-/d/) by presenting it in different immediate phonetic contexts (preceding a vowel (CV) versus following a vowel (VC)), in order to investigate the auditory event-related fields of native Danish...... listeners to a sound contrast that is both phonemic and allophonic within Danish. Methods: Relevant syllables were recorded by a male native Danish speaker. The stimuli were then created by cross-splicing the sounds so that the same vowel [æ] was used for all syllables, and the same [t] and [d] were used...

  2. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  3. Early Morphology and Recurring Sound Patterns

    DEFF Research Database (Denmark)

    Kjærbæk, Laila; Basbøll, Hans; Lambertsen, Claus

    Corpus is a longitudinal corpus of spontaneous Child Speech and Child Directed Speech recorded in the children's homes in interaction with their parents or caretaker and transcribed in CHILDES (MacWhinney 2007 a, b), supplemented by parts of Kim Plunkett's Danish corpus (CHILDES) (Plunkett 1985, 1986...... in creating the typologically characteristic syllable structure of Danish with extreme sound reductions (Rischel 2003, Basbøll 2005) presenting a challenge to the language acquiring child (Bleses & Basbøll 2004). Building upon the Danish CDI-studies as well as on the Odense Twin Corpus and experimental data...

  4. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate...... an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...
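
    Sound-zone filter design of the kind described here is commonly posed as regularized least squares (pressure matching): match a target pressure in the bright zone while an L2-norm penalty suppresses the dark zone and limits filter effort, which is the separation-versus-ringing tradeoff the record mentions. A toy single-frequency sketch under that formulation, with random matrices standing in for measured room responses (not the paper's method):

    import numpy as np

    rng = np.random.default_rng(0)
    L, Mb, Md = 8, 16, 16                     # loudspeakers, mics per zone
    # Random complex stand-ins for measured loudspeaker-to-mic transfer functions.
    G_b = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
    G_d = rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L))
    p_t = np.ones(Mb, dtype=complex)          # desired bright-zone pressure

    lam, beta = 10.0, 1e-3                    # dark-zone weight, effort weight
    # Minimize |G_b q - p_t|^2 + lam |G_d q|^2 + beta |q|^2 (weighted L2 penalty).
    A = G_b.conj().T @ G_b + lam * G_d.conj().T @ G_d + beta * np.eye(L)
    q = np.linalg.solve(A, G_b.conj().T @ p_t)

    contrast = (np.linalg.norm(G_b @ q)**2 / Mb) / (np.linalg.norm(G_d @ q)**2 / Md)
    print(f"acoustic contrast: {10 * np.log10(contrast):.1f} dB")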

  5. Biography, Identity, Improvisation, Sound: Intersections of Personal and Social Identity through Improvisation

    Science.gov (United States)

    Smilde, Rineke

    2016-01-01

    This essay addresses the relationship of improvisation and identity. Biographical research that was conducted by the author into professional musicians' lifelong learning showed the huge importance of improvisation for personal expression. Musically, the concept of "sound" appeared to serve as a strong metaphor for identity. In addition,…

  6. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds was developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres were semi-automatically screened for snore events, which subsequently were classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects was split into Train, Development, and Test sets. An SVM classifier was trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. The best performing subset was the MFCC-related set of LLDs. A strong difference in performance was observed between the permutations of the Train, Development, and Test partitions, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced dataset. A database of snoring sounds is presented in which sounds are classified according to their excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
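
    Unweighted average recall (UAR), the figure of merit quoted here, is the mean of the per-class recalls, so each of the four VOTE classes counts equally however unbalanced the data. A minimal sketch with invented labels:

    from sklearn.metrics import recall_score

    # Invented gold and predicted labels over the four VOTE classes.
    y_true = ["V", "O", "T", "E", "V", "V", "O"]
    y_pred = ["V", "O", "E", "E", "V", "O", "O"]

    # UAR = mean of per-class recalls (sklearn's "macro" average).
    print(f"UAR = {recall_score(y_true, y_pred, average='macro'):.3f}")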

  7. Month of Conception and Learning Disabilities: A Record-Linkage Study of 801,592 Children.

    Science.gov (United States)

    Mackay, Daniel F; Smith, Gordon C S; Cooper, Sally-Ann; Wood, Rachael; King, Albert; Clark, David N; Pell, Jill P

    2016-10-01

    Learning disabilities have profound, long-lasting health sequelae. Affected children born over the course of 1 year in the United States of America generated an estimated lifetime cost of $51.2 billion. Results from some studies have suggested that autistic spectrum disorder may vary by season of birth, but there have been few studies in which investigators examined whether this is also true of other causes of learning disabilities. We undertook Scotland-wide record linkage of education (annual pupil census) and maternity (Scottish Morbidity Record 02) databases for 801,592 singleton children attending Scottish schools in 2006-2011. We modeled monthly rates using principal sine and cosine transformations of the month number and demonstrated cyclicity in the percentage of children with special educational needs. Rates were highest among children conceived in the first quarter of the year (January-March) and lowest among those conceived in the third (July-September) (8.9% vs 7.6%; P disabilities, and learning difficulties (e.g., dyslexia) and were absent for sensory or motor/physical impairments and mental, physical, or communication problems. Seasonality accounted for 11.4% (95% confidence interval: 9.0, 13.7) of all cases. Some biologically plausible causes of this variation, such as infection and maternal vitamin D levels, are potentially amendable to intervention. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
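
    Modelling monthly rates with principal sine and cosine transformations of the month number, as described, amounts to a cosinor-style regression in which the fitted sine/cosine coefficients capture the annual cycle. A hedged numpy sketch with invented monthly rates (not the study's code):

    import numpy as np

    month = np.arange(1, 13)
    rate = np.array([8.9, 8.8, 8.6, 8.3, 8.0, 7.8,   # invented monthly % rates
                     7.6, 7.7, 7.9, 8.2, 8.5, 8.8])

    # Design matrix: intercept plus principal sine/cosine of the month number.
    X = np.column_stack([np.ones_like(month, dtype=float),
                         np.sin(2 * np.pi * month / 12),
                         np.cos(2 * np.pi * month / 12)])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    amplitude = np.hypot(coef[1], coef[2])           # size of the annual cycle
    print(f"mean rate {coef[0]:.2f}%, seasonal amplitude {amplitude:.2f}%")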

  8. Universal mechanisms of sound production and control in birds and mammals

    DEFF Research Database (Denmark)

    Elemans, Coen; Rasmussen, Jeppe Have; Herbst, Christian T.

    2015-01-01

    As animals vocalize, their vocal organ transforms motor commands into vocalizations for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty in obtaining in vivo measurements. Here, we...... learning and is common to MEAD sound production across birds and mammals, including humans....

  9. Can road traffic mask sound from wind turbines? Response to wind turbine sound at different levels of road traffic sound

    International Nuclear Information System (INIS)

    Pedersen, Eja; Berg, Frits van den; Bakker, Roel; Bouma, Jelte

    2010-01-01

    Wind turbines are favoured in the switch-over to renewable energy. Suitable sites for further developments could be difficult to find, as the sound emitted from the rotor blades calls for a sufficient distance to residents to avoid negative effects. The aim of this study was to explore whether road traffic sound could mask wind turbine sound or, in contrast, increase annoyance due to wind turbine noise. Annoyance of road traffic and wind turbine noise was measured in the WINDFARMperception survey in the Netherlands in 2007 (n=725) and related to calculated levels of sound. The presence of road traffic sound did not in general decrease annoyance with wind turbine noise, except when levels of wind turbine sound were moderate (35-40 dB(A) Lden) and the road traffic sound level exceeded that level by at least 20 dB(A). Annoyance with both noises was intercorrelated, but this correlation was probably due to the influence of individual factors. Furthermore, visibility of and attitude towards wind turbines were significantly related to noise annoyance of modern wind turbines. The results can be used for the selection of suitable sites, possibly favouring already noise-exposed areas if wind turbine sound levels are sufficiently low.

  10. A comprehensive account of sound sequence imitation in the songbird.

    Directory of Open Access Journals (Sweden)

    Maren Westkott

    2016-07-01

    Full Text Available The amazing imitation capabilities of songbirds show that they can memorize sensory sequences and transform them into motor activities which in turn generate the original sound sequences. This suggests that the bird's brain can learn (1) to reliably reproduce spatio-temporal sensory representations and (2) to transform them into corresponding spatio-temporal motor activations by using an inverse mapping. Neither the synaptic mechanisms nor the network architecture enabling these two fundamental aspects of imitation learning are known. We propose an architecture of coupled neuronal modules that mimic areas in the songbird and show that a unique synaptic plasticity mechanism can serve to learn both sensory sequences in a recurrent neuronal network and an inverse model that transforms the sensory memories into the corresponding motor activations. The proposed membrane-potential-dependent learning rule, together with the architecture that includes basic features of the bird's brain, represents the first comprehensive account of bird imitation learning based on spiking neurons.

  11. Structure-borne sound structural vibrations and sound radiation at audio frequencies

    CERN Document Server

    Cremer, L; Petersson, Björn AT

    2005-01-01

    "Structure-Borne Sound" is a thorough introduction to structural vibrations with emphasis on audio frequencies and the associated radiation of sound. The book presents in-depth discussions of fundamental principles and basic problems, in order to enable the reader to understand and solve his own problems. It includes chapters dealing with measurement and generation of vibrations and sound, various types of structural wave motion, structural damping and its effects, impedances and vibration responses of the important types of structures, as well as with attenuation of vibrations, and sound radi

  12. Making magnetic recording commercial: 1920-1955

    Science.gov (United States)

    Clark, Mark H.

    1999-03-01

    Although magnetic recording had been invented in 1898, it was not until the late 1920s that the technology was successfully marketed to the public. Firms in Germany, the United Kingdom, and the United States developed and sold magnetic recorders for specialized markets in broadcasting and telephone systems through the 1930s. The demands of World War II considerably expanded the use of magnetic recording, and with the end of the war, firms in the United States sought to bring magnetic recording to home and professional music recording. Using a combination of captured German technology and American wartime research, American companies such as Ampex, Magnecord, 3M, the Brush Development Company, and others created a vast new industry. By the mid-1950s, magnetic recording was firmly established as a method for recording both sound and data.

  13. Is 1/f sound more effective than simple resting in reducing stress response?

    Science.gov (United States)

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experiment group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental group. Relative alpha power was significantly lower, and relative beta power was significantly higher in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.
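
    The relative band powers compared in this record (theta, alpha, beta as fractions of total power) are conventionally computed from a power spectral density estimate. A generic sketch with SciPy, not the study's pipeline; the sampling rate and band limits are assumptions:

    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                   # assumed sampling rate (Hz)
    eeg = np.random.randn(int(60 * fs))          # noise standing in for one channel
    f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

    def relative_power(lo, hi, total=(1.0, 40.0)):
        # Band power as a fraction of total power, by trapezoidal integration.
        band = np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
        full = np.trapz(psd[(f >= total[0]) & (f < total[1])],
                        f[(f >= total[0]) & (f < total[1])])
        return band / full

    for name, (lo, hi) in {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}.items():
        print(f"relative {name} power: {relative_power(lo, hi):.3f}")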

  14. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  15. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  16. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

    Full Text Available Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants aged 10-14 and diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  17. Control of Toxic Chemicals in Puget Sound, Phase 3: Study of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    Energy Technology Data Exchange (ETDEWEB)

    Brandenberger, Jill M.; Louchouarn, Patrick; Kuo, Li-Jung; Crecelius, Eric A.; Cullinan, Valerie I.; Gill, Gary A.; Garland, Charity R.; Williamson, J. B.; Dhammapala, R.

    2010-07-05

    The results of the Phase 1 Toxics Loading study suggested that runoff from the land surface and atmospheric deposition directly to marine waters have resulted in considerable loads of contaminants to Puget Sound (Hart Crowser et al. 2007). The limited data available for atmospheric deposition fluxes throughout Puget Sound was recognized as a significant data gap. Therefore, this study provided more recent or first reported atmospheric deposition fluxes of PAHs, PBDEs, and select trace elements for Puget Sound. Samples representing bulk atmospheric deposition were collected during 2008 and 2009 at seven stations around Puget Sound spanning from Padilla Bay south to Nisqually River including Hood Canal and the Straits of Juan de Fuca. Revised annual loading estimates for atmospheric deposition to the waters of Puget Sound were calculated for each of the toxics and demonstrated an overall decrease in the atmospheric loading estimates except for polybrominated diphenyl ethers (PBDEs) and total mercury (THg). The median atmospheric deposition flux of total PBDE (7.0 ng/m2/d) was higher than that of the Hart Crowser (2007) Phase 1 estimate (2.0 ng/m2/d). The THg was not significantly different from the original estimates. The median atmospheric deposition flux for pyrogenic PAHs (34.2 ng/m2/d; without TCB) shows a relatively narrow range across all stations (interquartile range: 21.2- 61.1 ng/m2/d) and shows no influence of season. The highest median fluxes for all parameters were measured at the industrial location in Tacoma and the lowest were recorded at the rural sites in Hood Canal and Sequim Bay. Finally, a semi-quantitative apportionment study permitted a first-order characterization of source inputs to the atmosphere of the Puget Sound. Both biomarker ratios and a principal component analysis confirmed regional data from the Puget Sound and Straits of Georgia region and pointed to the predominance of biomass and fossil fuel (mostly liquid petroleum products such

  18. Effect of the radiofrequency volumetric tissue reduction of inferior turbinate on expiratory nasal sound frequency.

    Science.gov (United States)

    Seren, Erdal

    2009-01-01

    We sought to evaluate the short-term efficacy of radiofrequency volumetric tissue reduction (RFVTR) in the treatment of inferior turbinate hypertrophy (TH), as measured by expiratory nasal sound spectra. In our study, we aimed to investigate Odiosoft-rhino (OR) as a new diagnostic method to evaluate the nasal airflow of patients before and after RFVTR. In this study, we analyzed and recorded the expiratory nasal sound in patients with inferior TH before and after RFVTR. This analysis includes the time-expanded waveform, spectral analysis with time-averaged fast Fourier transform (FFT), and waveform analysis of the nasal sound. We found increased sound intensity at high frequencies (Hf) before RFVTR and decreased Hf intensity after RFVTR. This study indicates that RFVTR is an effective procedure to improve nasal airflow in patients with nasal obstruction due to inferior TH. We found significant decreases in the sound intensity level at Hf in the sound spectra after RFVTR. The OR results from the 2000- to 4000-Hz (Hf) interval may be more useful in assessing patients with nasal obstruction than other frequency intervals. OR may be used as a noninvasive diagnostic tool to evaluate nasal airflow.
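
    The time-averaged FFT analysis described, comparing intensity in the 2000-4000 Hz (Hf) band before and after treatment, can be approximated with a short spectral computation. A sketch assuming a mono WAV recording with a hypothetical file name:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    fs, x = wavfile.read("expiratory_nasal_sound.wav")    # hypothetical mono file
    f, psd = welch(x.astype(float), fs=fs, nperseg=2048)  # time-averaged spectrum

    hf = (f >= 2000) & (f <= 4000)               # the 2000-4000 Hz (Hf) band
    level_db = 10 * np.log10(psd[hf].mean())
    print(f"mean Hf band level: {level_db:.1f} dB (uncalibrated)")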

  19. Vibration analysis and sound field characteristics of a tubular ultrasonic radiator.

    Science.gov (United States)

    Liang, Zhaofeng; Zhou, Guangping; Zhang, Yihui; Li, Zhengzhong; Lin, Shuyu

    2006-12-01

    A type of tubular ultrasonic radiator used in ultrasonic liquid processing is studied. The frequency equation of the tubular radiator is derived, and its radiated sound field in a cylindrical reactor is calculated using the finite element method and recorded by means of aluminum foil erosion. The results indicate that the sound field of the tubular ultrasonic radiator in a cylindrical reactor exhibits standing waves along both its radial and axial directions, that the amplitudes of the standing waves decrease gradually along the radial direction, and that the number of standing waves along the axial direction is equal to the axial wave number of the tubular radiator. The experimental results are in good agreement with the calculated results.

  20. Performance evaluation of heart sound cancellation in FPGA hardware implementation for electronic stethoscope.

    Science.gov (United States)

    Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian

    2014-01-01

    This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using a DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.

  1. Performance Evaluation of Heart Sound Cancellation in FPGA Hardware Implementation for Electronic Stethoscope

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2014-01-01

    Full Text Available This paper presents the design and evaluation of the hardware circuit for electronic stethoscopes with heart sound cancellation capabilities using field programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart sound attributes from the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation with one Altera Cyclone II EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using a DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppressions were observed in our experiments from both the frequency domain and time domain perspectives.
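
    The adaptive line enhancer used in both versions of this work is, at its core, an LMS adaptive filter: fed a delayed copy of the input, it learns to predict the quasi-periodic heart-sound component, and the prediction error retains the broadband breath sound. A software sketch of that idea (the papers implement it in FPGA hardware; the delay, tap count, and step size here are assumptions):

    import numpy as np

    def ale(x, delay=32, taps=64, mu=1e-3):
        # Adaptive line enhancer: an LMS filter predicts sample x[n] from a
        # delayed window of the input. The prediction converges on the periodic
        # (heart sound) part; the error keeps the broadband breath sound.
        w = np.zeros(taps)
        periodic = np.zeros_like(x, dtype=float)
        residual = np.zeros_like(x, dtype=float)
        for n in range(delay + taps, len(x)):
            u = x[n - delay - taps:n - delay][::-1]   # delayed reference window
            y = w @ u
            e = x[n] - y
            w += 2 * mu * e * u                       # LMS weight update
            periodic[n], residual[n] = y, e
        return periodic, residual

    # Usage: heart_estimate, breath_only = ale(recorded_signal)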

  2. Crossmodal Perceptual Learning and Sensory Substitution

    Directory of Open Access Journals (Sweden)

    Michael J Proulx

    2011-10-01

    Full Text Available A sensory substitution device for blind persons aims to provide the missing visual input by converting images into a form that another modality can perceive, such as sound. Here I will discuss the perceptual learning and attentional mechanisms necessary for interpreting sounds produced by a device (The vOICe) in a visuospatial manner. Although some aspects of the conversion, such as relating vertical location to pitch, rely on natural crossmodal mappings, the extensive training required suggests that synthetic mappings are required to generalize perceptual learning to new objects and environments, and ultimately to experience visual qualia. Here I will discuss the effects of the conversion and training on perception and attention that demonstrate the synthetic nature of learning the crossmodal mapping. Sensorimotor experience may be required to facilitate learning, develop expertise, and to develop a form of synthetic synaesthesia.

  3. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material revealing the form...... geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  4. Can infants learn phonology in the lab? A meta-analytic answer.

    Science.gov (United States)

    Cristia, Alejandrina

    2018-01-01

    Two of the key tasks facing the language-learning infant lie at the level of phonology: establishing which sounds are contrastive in the native inventory, and determining what their possible syllabic positions and permissible combinations (phonotactics) are. In 2002-2003, two theoretical proposals, one bearing on how infants can learn sounds (Maye, Werker, & Gerken, 2002) and the other on phonotactics (Chambers, Onishi, & Fisher, 2003), were put forward on the pages of Cognition, each supported by two laboratory experiments, wherein a group of infants was briefly exposed to a set of pseudo-words, and plausible phonological generalizations were tested subsequently. These two papers have received considerable attention from the general scientific community, and inspired a flurry of follow-up work. In the context of questions regarding the replicability of psychological science, the present work uses a meta-analytic approach to appraise extant empirical evidence for infant phonological learning in the laboratory. It is found that neither seminal finding (on learning sounds and learning phonotactics) holds up when close methodological replications are integrated, although less close methodological replications do provide some evidence in favor of the sound learning strand of work. Implications for authors and readers of this literature are drawn out. It would be desirable that additional mechanisms for phonological learning be explored, and that future infant laboratory work employ paradigms that rely on constrained and unambiguous links between experimental exposure and measured infant behavior. Copyright © 2017. Published by Elsevier B.V.
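
    The meta-analytic integration referred to here typically combines per-study effect sizes with inverse-variance weights. A minimal fixed-effect sketch with invented numbers; real syntheses of this literature would usually add random-effects heterogeneity modelling:

    import numpy as np

    d = np.array([0.45, 0.10, -0.05, 0.20])   # invented per-study effect sizes
    se = np.array([0.20, 0.15, 0.18, 0.25])   # invented standard errors
    w = 1.0 / se**2                           # inverse-variance weights

    d_pooled = np.sum(w * d) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    print(f"pooled d = {d_pooled:.2f} +/- {1.96 * se_pooled:.2f} (95% CI)")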

  5. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  6. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  7. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  8. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  9. Brainstem auditory evoked potentials in healthy cats recorded with surface electrodes

    Directory of Open Access Journals (Sweden)

    Mihai Musteata

    2013-01-01

    Full Text Available The aim of this study was to evaluate the brainstem auditory evoked potentials of seven healthy cats, using surface electrodes. Latencies of waves I, III and V, and intervals I–III, I–V and III–V were recorded. Monaural and binaural stimulation of the cats was performed with sounds ranging between 40 and 90 decibel Sound Pressure Level (dB SPL). All latencies were lower than those described in previous studies, where needle electrodes were used. In the case of binaural stimulation, latencies of waves III and V were greater compared to those obtained for monaural stimulation (P P > 0.05. Regardless of the sound intensity, the interwave latency was constant (P > 0.05). Interestingly, no differences were noticed for latencies of waves III and V when sound intensity was higher than 80 dB SPL. This study adds to knowledge in the field of electrophysiology and shows that recording brainstem auditory evoked potentials in cats using surface electrodes is a viable method to capture the transmission of auditory information. It can be used reliably in clinical practice, when small changes in latency values may be an objective factor in health status evaluation.

  10. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This argues for considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  11. Presentations and recorded keynotes of the First European Workshop on Latent Semantic Analysis in Technology Enhanced Learning

    NARCIS (Netherlands)

    Several

    2007-01-01

    Presentations and recorded keynotes at the 1st European Workshop on Latent Semantic Analysis in Technology-Enhanced Learning, March, 29-30, 2007. Heerlen, The Netherlands: The Open University of the Netherlands. Please see the conference website for more information:

  12. Perceptual learning of acoustic noise generates memory-evoked potentials.

    Science.gov (United States)

    Andrillon, Thomas; Kouider, Sid; Agus, Trevor; Pressnitzer, Daniel

    2015-11-02

    Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. On the use of binaural recordings for dynamic binaural reproduction

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Christensen, Flemming

    2011-01-01

    Binaural recordings are considered applicable only for static binaural reproduction. That is, playback of binaural recordings can only reproduce the sound field captured for the fixed position and orientation of the recording head. However, given some conditions it is possible to use binaural...... recordings for the reproduction of binaural signals that change according to the listener actions, i.e. dynamic binaural reproduction. Here we examine the conditions that allow for such dynamic recording/playback configuration and discuss advantages and disadvantages. Analysis and discussion focus on two...

  14. Recording single neurons' action potentials from freely moving pigeons across three stages of learning.

    Science.gov (United States)

    Starosta, Sarah; Stüttgen, Maik C; Güntürkün, Onur

    2014-06-02

    While the subject of learning has attracted immense interest from both behavioral and neural scientists, only relatively few investigators have observed single-neuron activity while animals are acquiring an operantly conditioned response, or when that response is extinguished. But even in these cases, observation periods usually encompass only a single stage of learning, i.e. acquisition or extinction, but not both (exceptions include protocols employing reversal learning; see Bingman et al. (1) for an example). However, acquisition and extinction entail different learning mechanisms and are therefore expected to be accompanied by different types and/or loci of neural plasticity. Accordingly, we developed a behavioral paradigm which institutes three stages of learning in a single behavioral session and which is well suited for the simultaneous recording of single neurons' action potentials. Animals are trained on a single-interval forced choice task which requires mapping each of two possible choice responses to the presentation of different novel visual stimuli (acquisition). After having reached a predefined performance criterion, one of the two choice responses is no longer reinforced (extinction). Following a certain decrement in performance level, correct responses are reinforced again (reacquisition). By using a new set of stimuli in every session, animals can undergo the acquisition-extinction-reacquisition process repeatedly. Because all three stages of learning occur in a single behavioral session, the paradigm is ideal for the simultaneous observation of the spiking output of multiple single neurons. We use pigeons as a model system, but the task can easily be adapted to any other species capable of conditioned discrimination learning.

  15. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity...... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs...... 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  16. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions for them as if they were products. Nonwords composed of fricatives/front vowels were rated as smaller than those composed of plosives/back vowels. In Experiment 2, participants studied sound symbolically congruent and incongruent pairings of nonwords with participant-generated definitions. Definitions paired with nonwords whose sound symbolic size matched the participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects through mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names that not only convey desired product characteristics but are also memorable for consumers.

  17. Sound and speech detection and classification in a Health Smart Home.

    Science.gov (United States)

    Fleury, A; Noury, N; Vacher, M; Glasson, H; Seri, J F

    2008-01-01

    Improvements in medicine increase life expectancy around the world and create a new bottleneck at the entrance of specialized and equipped institutions. To allow elderly people to stay at home, researchers are working on ways to monitor them in their own environment with non-invasive sensors. To meet this goal, smart homes, equipped with many sensors, deliver information on the activities of the person and can help detect distress situations. In this paper, we present a global speech and sound recognition system that can be set up in a flat. We placed eight microphones in the Health Smart Home of Grenoble (a real living flat of 47 m²) and we automatically analyze and sort the different sounds recorded in the flat and the speech uttered (to detect normal or distress French sentences). We introduce the methods for the sound and speech recognition, the post-processing of the data, and finally the experimental results obtained in real conditions in the flat.
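    The pipeline this abstract describes, acoustic features extracted from microphone signals followed by a trained classifier, can be illustrated in a few lines. The sketch below is a generic stand-in assuming the librosa and scikit-learn packages; the MFCC features, SVM classifier, and class labels are illustrative choices, not the system described by the authors.

      import numpy as np
      import librosa
      from sklearn.svm import SVC

      def clip_features(path: str) -> np.ndarray:
          """Summarise one recorded clip as its mean MFCC vector."""
          y, sr = librosa.load(path, sr=16000)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
          return mfcc.mean(axis=1)

      def train_classifier(train_paths, train_labels):
          # train_paths/train_labels are hypothetical labelled clips,
          # e.g. classes such as "door_slam", "dish_clatter", "speech".
          X = np.stack([clip_features(p) for p in train_paths])
          return SVC(kernel="rbf").fit(X, train_labels)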

  18. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about anthropology of sound, sound studies, musical canons and ideology.

  19. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contralateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally deaf individuals. This rerouting would be expected to disrupt access to the monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and over the fitting of the CROS device, to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal-hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone, with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving the presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues than when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  20. Completely reproducible description of digital sound data with cellular automata

    International Nuclear Information System (INIS)

    Wada, Masato; Kuroiwa, Jousuke; Nara, Shigetoshi

    2002-01-01

    A novel method for the compressed and completely reproducible description of digital sound data by means of the rule dynamics of CA (cellular automata) is proposed. The digital data of spoken words and music, recorded in the standard format of a compact disc, are reproduced completely by this method using only two rules in a one-dimensional CA, without loss of information
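    The paper's specific rule encoding is not given in the abstract, but the flavor of rule dynamics in a one-dimensional CA can be shown with a short sketch. The rule number and initial state below are arbitrary illustrations, not the two rules used by the authors.

      import numpy as np

      def eca_step(state: np.ndarray, rule: int) -> np.ndarray:
          """One update of a 1-D elementary CA with periodic boundaries."""
          left, right = np.roll(state, 1), np.roll(state, -1)
          # Each (left, self, right) neighbourhood indexes one bit of the rule number.
          idx = (left << 2) | (state << 1) | right
          return (rule >> idx) & 1

      # Evolve a random binary row under rule 110 for a few steps.
      row = np.random.default_rng(0).integers(0, 2, size=64)
      for _ in range(5):
          row = eca_step(row, rule=110)
      print(row)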

  1. Towards a more sonically inclusive museum practice: a new definition of the ‘sound object’

    Directory of Open Access Journals (Sweden)

    John Kannenberg

    2017-11-01

    As museums continue to search for new ways to attract visitors, recent trends within museum practice have focused on providing audiences with multisensory experiences. Books such as 2014’s The Multisensory Museum present preliminary strategies by which museums might help visitors engage with collections using senses beyond the visual. In this article, an overview of the multisensory roots of museum display and an exploration of the shifting definition of ‘object’ leads to a discussion of Pierre Schaeffer’s musical term objet sonore – the ‘sound object’, which has traditionally stood for recorded sounds on magnetic tape used as source material for electroacoustic musical composition. A problematic term within sound studies, this article proposes a revised definition of ‘sound object’, shifting it from experimental music into the realm of the author’s own experimental curatorial practice of establishing The Museum of Portable Sound, an institution dedicated to the collection and display of sounds as cultural objects. Utilising Brian Kane’s critique of Schaeffer, Christoph Cox and Casey O’Callaghan’s thoughts on sonic materialism, David Novak and Matt Sakakeeny’s anthropological approach to sound theory, and art historian Alexander Nagel’s thoughts on the origins of art forgery, this article presents a new working definition of the sound object as a museological (rather than a musical) concept.

  2. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    Science.gov (United States)

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both of natural ambient and of anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model more closely approximates the observed sound attenuation.
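    The two attenuation models compared here are the standard geometric spreading laws: received level falls off as 20 log10(r) for spherical spreading and 10 log10(r) for cylindrical spreading, relative to the level at 1 m. A minimal sketch (the source level and distance are invented illustrative values):

      import numpy as np

      def received_level(source_level_db: float, distance_m: float,
                         model: str = "spherical") -> float:
          """Received level under simple geometric spreading, referenced to 1 m."""
          exponent = 20.0 if model == "spherical" else 10.0  # cylindrical
          return source_level_db - exponent * np.log10(distance_m)

      # A hypothetical 160 dB re 1 uPa source level observed 100 m away:
      for model in ("spherical", "cylindrical"):
          print(model, received_level(160.0, 100.0, model))  # 120.0 vs. 140.0 dB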

  3. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    Science.gov (United States)

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges of reference within a simple noun hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects ('alien life forms') and were passively exposed to either category labels or item labels in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent and discrimination when incongruent, whereas for item labels incongruence generates errors in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  4. Perceptual statistical learning over one week in child speech production.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2017-07-01

    What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Long-term evolution of brainstem electrical evoked responses to sound after restricted ablation of the auditory cortex.

    Directory of Open Access Journals (Sweden)

    Verónica Lamas

    INTRODUCTION: This study aimed to assess the top-down control of sound processing in the auditory brainstem of rats. Short-latency evoked responses were analyzed after unilateral or bilateral ablation of the auditory cortex. This experimental paradigm was also used to analyze the long-term evolution of post-lesion plasticity in the auditory system and its ability to self-repair. METHOD: Auditory cortex lesions were performed in rats by stereotactically guided fine-needle aspiration of the cerebrocortical surface. Auditory Brainstem Responses (ABR) were recorded at post-surgery days (PSD) 1, 7, 15 and 30. Recordings were performed under closed-field conditions, using click trains at different sound intensity levels, followed by statistical analysis of threshold values and of ABR amplitude and latency variables. Subsequently, brains were sectioned and immunostained for GAD and parvalbumin to assess the location and extent of the lesions accurately. RESULTS: Alterations in ABR variables depended on the type of lesion and the post-surgery time of the ABR recordings. Accordingly, bilateral ablations caused a statistically significant increase in thresholds at PSD1 and 7 and a decrease in wave amplitudes at PSD1 that recovered by PSD7. No effects on latency were noted at PSD1 and 7, whilst recordings at PSD15 and 30 showed statistically significant decreases in latency. Conversely, unilateral ablations had no effect on auditory thresholds or latencies, while wave amplitudes decreased only at PSD1, and strictly in the ipsilateral ear. CONCLUSION: Post-lesion plasticity in the auditory system acts over two time periods: a short-term period of decreased sound sensitivity (until PSD7), most likely resulting from axonal degeneration, and a long-term period (beyond PSD7), with changes in latency responses and recovery of threshold and amplitude values. The cerebral cortex may have a net positive gain on the auditory pathway response to sound.

  6. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds

    DEFF Research Database (Denmark)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin

    2017-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound......-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound...... from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect...

  7. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  8. The din of gunfire: Rethinking the role of sound in World War II newsreels

    Directory of Open Access Journals (Sweden)

    Masha Shpolberg

    2014-12-01

    French film historian Laurent Véray has famously called World War I ‘the first media war of the twentieth century’. Newsreels, which first appeared in 1910, brought the war to movie theaters across Europe and the U.S., screening combat for those on the ‘home front’. However, while the audience could see the action, it could not hear it – sometimes only live music would accompany the movements of the troops. The arrival of sound newsreels in 1929 radically transformed moviegoers’ experiences of the news and, by necessity, of armed conflict. Drawing on examples of World War II newsreels from British Pathé’s archive, which was recently made available online, this article seeks to delineate the logic governing the combination of voice-over commentary, music, sound effects, and field-recorded sound, and argues that it can be traced directly to the treatment of sound in the ‘Great War’ fiction films of the preceding decade.

  9. Conducting Expressively: Navigating Seven Misconceptions That Inhibit Meaningful Connection to Ensemble and Sound

    Science.gov (United States)

    Snyder, Courtney

    2016-01-01

    When expressivity (ignited by imagination) is incorporated into the learning process for both the conductor (teacher) and player (student), the qualities of movement, communication, instruction, and ensemble sound all change for the better, often with less work. Expressive conducting allows the conductor to feel more connected to the music and the…

  10. A comparison between swallowing sounds and vibrations in patients with dysphagia

    Science.gov (United States)

    Movahedi, Faezeh; Kurosu, Atsuko; Coyle, James L.; Perera, Subashan

    2017-01-01

    Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether they present interchangeable information. The aim of this study is to present a broad comparison of swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years, 42 male, 30 female) who underwent videofluoroscopic examination. The participants swallowed one or more boluses of thickened liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding. Either a comfortable self-selected volume from a cup or a volume controlled by the examiner from a 5 ml spoon was given to the participants. A comprehensive set of features was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows presented in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations. In addition, the Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. To conclude, the information provided by microphones and accelerometers about swallowing function is unique, and these two transducers are not interchangeable. Consequently, the selection of transducer will be a vital step in future studies. PMID:28495001
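    Two of the feature families mentioned, distributional kurtosis and information-theoretic Lempel-Ziv complexity, can be computed as sketched below. This is a generic illustration using numpy/scipy and a simplified LZ76 phrase parser, not the authors' exact feature definitions.

      import numpy as np
      from scipy.stats import kurtosis

      def lempel_ziv_complexity(seq: str) -> int:
          """Number of distinct phrases in a simple LZ76-style parsing."""
          phrases, i = set(), 0
          while i < len(seq):
              j = i + 1
              while seq[i:j] in phrases and j <= len(seq):
                  j += 1
              phrases.add(seq[i:j])  # a trailing known phrase is not re-counted
              i = j
          return len(phrases)

      # Binarize a signal around its median before the LZ parse.
      signal = np.random.default_rng(1).normal(size=2000)
      binary = "".join("1" if s > np.median(signal) else "0" for s in signal)
      print("kurtosis:", kurtosis(signal))
      print("LZ complexity:", lempel_ziv_complexity(binary))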

  11. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    The relationship between the meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects violating arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages, we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  12. Development of Filmstrip Sequence Photographs and Sound Reproduction of Educational Television Presentations.

    Science.gov (United States)

    Martini, Harry R.

    Black and white filmstrips that reproduced still pictures and sound track from educational television broadcasts were used to study the effectiveness of ETV reproductions in aiding poor achievers. The specific advantage of such a reproduction was that it could be paced to the learning tempo of the students rather than using the too-fast pace of a…

  13. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  14. Behavioural Response Thresholds in New Zealand Crab Megalopae to Ambient Underwater Sound

    Science.gov (United States)

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2011-01-01

    A small number of studies have demonstrated that settlement-stage decapod crustaceans are able to detect and exhibit swimming, settlement, and metamorphosis responses to ambient underwater sound emanating from coastal reefs. However, the intensity of the acoustic cue required to initiate the settlement and metamorphosis response, and therefore the potential range over which this acoustic cue may operate, is not known. The current study determined the behavioural response thresholds of megalopae of four species of New Zealand brachyuran crabs by exposing them to different intensity levels of broadcast reef sound recorded from their preferred settlement habitat and from an unfavourable settlement habitat. Megalopae of the rocky-reef crab, Leptograpsus variegatus, exhibited the lowest behavioural response threshold (highest sensitivity), with a significant reduction in time to metamorphosis (TTM) when exposed to underwater reef sound with an intensity of 90 dB re 1 µPa and greater (100, 126 and 135 dB re 1 µPa). Megalopae of the mud crab, Austrohelice crassa, which settles in soft sediment habitats, exhibited no response to any of the underwater reef sound levels. All reef-associated species exposed to sound levels from an unfavourable settlement habitat showed no significant change in TTM, even at intensities similar to those of their preferred reef sound for which reductions in TTM were observed. These results indicate that megalopae are able to discern and respond selectively to habitat-specific acoustic cues. The behavioural response thresholds to levels of underwater reef sound determined in the current study for four species of crabs enable a preliminary estimate of the spatial range over which an acoustic settlement cue may operate, from 5 m to 40 km depending on the species. Overall, these results indicate that underwater sound is likely to play a major role in influencing the spatial patterns of settlement of coastal crab

  15. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sound. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights into students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion of the implications of the results for instruction.

  16. Sound symbolism: the role of word sound in meaning.

    Science.gov (United States)

    Svantesson, Jan-Olof

    2017-09-01

    The question of whether there is a natural connection between sound and meaning, or whether they are related only by convention, has been debated since antiquity. In linguistics, it is usually taken for granted that 'the linguistic sign is arbitrary,' and exceptions like onomatopoeia have been regarded as marginal phenomena. However, it is becoming more and more clear that motivated relations between sound and meaning are more common and important than has been thought. There is now a large and rapidly growing literature on subjects such as ideophones (or expressives), words that describe how a speaker perceives a situation with the senses, and phonaesthemes, units like English gl-, which occur in many words that share a meaning component (in this case 'light': gleam, glitter, etc.). Furthermore, psychological experiments have shown that sound symbolism in one language can be understood by speakers of other languages, suggesting that some kinds of sound symbolism are universal. WIREs Cogn Sci 2017, 8:e1441. doi: 10.1002/wcs.1441. © 2017 Wiley Periodicals, Inc.

  17. Noise Source Visualization Using a Digital Voice Recorder and Low-Cost Sensors.

    Science.gov (United States)

    Cho, Yong Thung

    2018-04-03

    Accurate sound visualization of noise sources is required for optimal noise control. Typical noise measurement systems require microphones, an analog-to-digital converter, cables, a data acquisition system, etc., which may not be affordable for potential users. Also, many such systems are not highly portable and may not be convenient for travel. Handheld personal electronic devices such as smartphones and digital voice recorders, with relatively low cost and high performance, have recently become widely available. Even though such devices are highly portable, directly using them for noise measurement may lead to erroneous results, since this equipment was originally designed for voice recording. In this study, external microphones were connected to a digital voice recorder to conduct measurements, and the recorded input was processed for noise visualization. In this way, a low-cost, compact sound visualization system was designed and used to visualize two actual noise sources with different characteristics: an enclosed loudspeaker and a small air compressor. Reasonable accuracy of noise visualization for these two sources was shown over a relatively wide frequency range. This very affordable and compact sound visualization system can be used for many practical noise visualization applications in addition to educational purposes.

  18. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    Directory of Open Access Journals (Sweden)

    Loes J Bolle

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-lethal) effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero-to-peak pressure levels up to 210 dB re 1 µPa² (zero-to-peak pressures up to 32 kPa) and single-pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
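    The cumulative figure quoted above follows from energy summation over identical pulses: SELcum = SELsingle + 10 log10(N), so 100 strikes at a single-pulse SEL of 186 dB re 1 µPa²s give the 206 dB re 1 µPa²s applied in the study. A one-line check:

      import math

      def cumulative_sel(single_pulse_sel_db: float, n_pulses: int) -> float:
          """Cumulative sound exposure level for n identical pulses (energy sum)."""
          return single_pulse_sel_db + 10.0 * math.log10(n_pulses)

      print(cumulative_sel(186.0, 100))  # 206.0 dB re 1 uPa^2 s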

  19. Privacy Act System of Records: Employee Counseling and Assistance Program Records, EPA-27

    Science.gov (United States)

    Learn about the Employee Counseling and Assistance Program Records System, including who is covered in the system, the purpose of data collection, routine uses for the system's records, and other security procedures.

  20. Sound specificity effects in spoken word recognition: The effect of integrality between words and sounds.

    Science.gov (United States)

    Strori, Dorina; Zaar, Johannes; Cooke, Martin; Mattys, Sven L

    2018-01-01

    Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice and sound specificity effects. In Experiment 1, we examined two conditions where integrality is high. Namely, the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated along the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral as opposed to distinct auditory objects.

  1. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for the analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several worked examples. Various aspects of musical sound spectra, such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity, are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.

  2. Distributional learning has immediate and long-lasting effects.

    Science.gov (United States)

    Escudero, Paola; Williams, Daniel

    2014-11-01

    Evidence of distributional learning, a statistical learning mechanism centered on the relative frequency of exposure to different tokens, has mainly come from short-term learning studies and therefore does not directly address the development of important learning processes. The present longitudinal study examines both short- and long-term effects of distributional learning of phonetic categories on non-native sound discrimination over a 12-month period. Two groups of listeners were exposed to a two-minute distribution of auditory stimuli in which the most frequently presented tokens either approximated or exaggerated the natural production of the speech sounds, whereas a control group listened to a piece of classical music for the same length of time. Discrimination by listeners in the two distribution groups improved immediately after the short exposure, replicating previous results. Crucially, this improvement was maintained after six and 12 months, demonstrating that distributional learning has long-lasting effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Multimedia radiology self-learning course on the world wide web

    International Nuclear Information System (INIS)

    Sim, Jung Suk; Kim, Jong Hyo; Kim, Tae Kyoung; Han, Joon Koo; Kang, Heung Sik; Yeon, Kyung Mo; Han, Man Chung

    1997-01-01

    The creation and maintenance of radiology teaching materials is both laborious and time-consuming, but important at a teaching hospital. This problem can be solved efficiently with the technology offered by today's World Wide Web, and on this basis we devised a multimedia radiology self-learning course for abdominal ultrasound and CT. A combination of video and audio tapes had been used as teaching material; the authors digitized and converted these to Hypertext Mark-up Language (HTML) format. Films were digitized with a digital camera and compressed to Joint Photographic Experts Group (JPEG) format, while audio tapes were digitized with a sound recorder and compressed to RealAudio format. Multimedia on the World Wide Web facilitates easy management and maintenance of a self-learning course. To make the course more suitable for practical use, continual upgrading on the basis of experience is needed. (author). 3 refs., 4 figs

  4. Sound velocity of tantalum under shock compression in the 18–142 GPa range

    Energy Technology Data Exchange (ETDEWEB)

    Xi, Feng, E-mail: xifeng@caep.cn; Jin, Ke; Cai, Lingcang, E-mail: cai-lingcang@aliyun.com; Geng, Huayun; Tan, Ye; Li, Jun [National Key Laboratory of Shock Waves and Detonation Physics, Institute of Fluid Physics, CAEP, P.O. Box 919-102 Mianyang, Sichuan 621999 (China)

    2015-05-14

    Dynamic compression experiments on tantalum (Ta) in the 18–142 GPa shock pressure range were conducted using explosive, two-stage light-gas-gun, and powder-gun drivers, respectively. The time-resolved Ta/LiF (lithium fluoride) interface velocity profiles were recorded with a displacement interferometer system for any reflector. Sound velocities of Ta were obtained from peak-state time-duration measurements with the step-sample technique and the direct-reverse impact technique. The uncertainties of the measured sound velocities were analyzed carefully, which suggests that the symmetrical impact method with step-samples is more accurate for sound velocity measurement, and that the most important parameter in this type of experiment is an accurate sample/window particle velocity profile, especially an accurate peak-state time duration. From these carefully analyzed sound velocity data, no evidence of a phase transition was found up to the shock melting pressure of Ta.

  5. Management implications of broadband sound in modulating wild silver carp (Hypophthalmichthys molitrix) behavior

    Science.gov (United States)

    Vetter, Brooke J.; Calfee, Robin D.; Mensinger, Allen F.

    2017-01-01

    Invasive silver carp (Hypophthalmichthys molitrix) dominate large regions of the Mississippi River drainage, outcompete native species, and are notorious for their prolific and unusual jumping behavior. Juvenile and adult (~25 kg) carp at high densities are known to jump up to 3 m above the water surface in response to moving watercraft. Broadband sound recorded from an outboard motor (100 hp at 32 km/hr) can modulate their behavior in captivity; however, the response of wild silver carp to broadband sound had yet to be determined. In this experiment, broadband sound (0.06–10 kHz) elicited jumping behavior from silver carp in the Spoon River near Havana, IL, independent of boat movement, indicating that the acoustic stimulus alone is sufficient to induce jumping. Furthermore, the number of jumping fish decreased with subsequent sound exposures. Understanding silver carp jumping is not only important from a behavioral standpoint, it is also critical for determining effective techniques for controlling this harmful species, such as herding fish into a net for removal.

  6. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. It attempts to determine techniques of art analysis for approaches to the study of video games, including the aesthetics of their sounds. The article offers a range of research methods, considering video game scoring as a contemporary creative practice.

  7. Do high sound pressure levels of crowing in roosters necessitate passive mechanisms for protection against self-vocalization?

    Science.gov (United States)

    Claes, Raf; Muyshondt, Pieter G G; Dirckx, Joris J J; Aerts, Peter

    2018-02-01

    High sound pressure levels (>120 dB) cause damage or death of the hair cells of the inner ear, hence causing hearing loss. Vocalization differences are present between hens and roosters. Crowing in roosters is reported to produce sound pressure levels of 100 dB measured at a distance of 1 m. In this study we measured the sound pressure levels that exist at the entrance of the outer ear canal. We hypothesize that roosters may benefit from a passive protective mechanism, while hens do not require such a mechanism. Audio recordings made in this study at the entrance of the outer ear canal of crowing roosters indeed show that a protective mechanism is needed, as sound pressure levels can reach amplitudes of 142.3 dB. Audio recordings made at varying distances from the crowing rooster show that at a distance of 0.5 m sound pressure levels already drop to 102 dB. Micro-CT scans of rooster and hen heads show that in roosters the auditory canal closes when the beak is opened. In hens the diameter of the auditory canal only narrows but does not close completely. A morphological difference between the sexes in the shape of a bursa-like slit in the outer ear canal causes the outer ear canal to close in roosters but not in hens. Copyright © 2017 Elsevier GmbH. All rights reserved.

  8. Machine Learning Methods to Extract Documentation of Breast Cancer Symptoms From Electronic Health Records.

    Science.gov (United States)

    Forsyth, Alexander W; Barzilay, Regina; Hughes, Kevin S; Lui, Dickson; Lorenz, Karl A; Enzinger, Andrea; Tulsky, James A; Lindvall, Charlotta

    2018-02-27

    Clinicians document cancer patients' symptoms in free-text format within electronic health record visit notes. Although symptoms are critically important to quality of life and often herald clinical status changes, computational methods to assess the trajectory of symptoms over time are woefully underdeveloped. Our goal was to create machine learning algorithms capable of extracting patient-reported symptoms from free-text electronic health record notes. The data set included 103,564 sentences obtained from the electronic clinical notes of 2695 breast cancer patients receiving paclitaxel-containing chemotherapy at two academic cancer centers between May 1996 and May 2015. We manually annotated 10,000 sentences and trained a conditional random field model to predict words indicating an active symptom (positive label), absence of a symptom (negative label), or no symptom at all (neutral label). Sentences labeled by human coders were divided into training, validation, and test data sets. Final model performance was determined on the 20% of test data unused in model development or tuning. The final model achieved precision of 0.82, 0.86, and 0.99 and recall of 0.56, 0.69, and 1.00 for positive, negative, and neutral symptom labels, respectively. The most common positive symptoms were pain, fatigue, and nausea. Machine-based labeling of 103,564 sentences took two minutes. We demonstrate the potential of machine learning to gather, track, and analyze symptoms experienced by cancer patients during chemotherapy. Although our initial model requires further optimization to improve performance, further model building may yield machine learning methods suitable for deployment in routine clinical care, quality improvement, and research applications. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
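    A conditional random field for this kind of token labeling can be set up as sketched below, assuming the sklearn-crfsuite package; the feature template, toy sentences, and label names are illustrative stand-ins, not the authors' actual configuration.

      import sklearn_crfsuite

      def word_features(sentence, i):
          """Minimal per-token features: the token plus its neighbours."""
          return {
              "word.lower": sentence[i].lower(),
              "prev.lower": sentence[i - 1].lower() if i > 0 else "<s>",
              "next.lower": sentence[i + 1].lower() if i < len(sentence) - 1 else "</s>",
          }

      # Toy data: POS marks an active symptom, NEG an explicitly absent one.
      sentences = [["denies", "nausea", "today"], ["reports", "severe", "fatigue"]]
      labels = [["O", "NEG", "O"], ["O", "O", "POS"]]

      X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
      crf.fit(X, labels)
      print(crf.predict(X))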

  9. Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset.

    Science.gov (United States)

    Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R

    1999-04-06

    Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.

  10. Sound attenuation in the ear of domestic chickens (Gallus gallus domesticus) as a result of beak opening

    Science.gov (United States)

    Claes, Raf; Dirckx, Joris J. J.

    2017-01-01

    Because the quadrate and the eardrum are connected, the hypothesis was tested that birds attenuate the transmission of sound through their ears by opening the bill, which potentially serves as an additional protective mechanism against self-generated vocalizations. In domestic chickens, it was examined whether a difference exists between hens and roosters, given the difference in vocalization capacity between the sexes. To test the hypothesis, vibrations of the columellar footplate were measured ex vivo with laser Doppler vibrometry (LDV) for closed and maximally opened beak conditions, with sounds introduced at the ear canal. The average attenuation was 3.5 dB in roosters and only 0.5 dB in hens. To demonstrate the importance of a putative protective mechanism, audio recordings were made of a crowing rooster. Sound pressure levels of 133.5 dB were recorded near the ears. The frequency content of the vocalizations was in accordance with the range of highest hearing sensitivity in chickens. The results indicate a small but significant difference in sound attenuation between hens and roosters. However, the amount of attenuation measured in the experiments on both hens and roosters is small and will provide little effective protection in addition to other mechanisms such as stapedius muscle activity. PMID:29291112

  11. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories: musicians have been found to process emotional musical and vocal sounds more efficiently than non-musicians. However, the neural correlates of these modulations, especially their time course, are not well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and the expertise of participants. The electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG oscillatory activity in the theta, alpha, beta, and gamma bands was quantified, and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in the theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and the emotional content of sounds in frontal alpha. The results reflect musicians' expertise in the recognition of emotion-conveying music, which seems to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on the processing of musical and vocal sounds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention...... has been paid to, for instance, a category such as ‘sound art’ together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term as musical sound – a recurring example being ‘noise’....

  13. Making media foundations of sound and image production

    CERN Document Server

    Roberts-Breslin, Jan

    2011-01-01

    Making Media takes the media production process and deconstructs it into its most basic components. Students will learn the basic concepts of media production: frame, sound, light, time, motion, sequencing, etc., and be able to apply them to any medium they choose. They will also become well grounded in the digital work environment and the tools required to produce media in the digital age. The companion Web site provides interactive exercises for each chapter, allowing students to explore the process of media production. The text is heavily illustrated and complete with sidebar discussions of

  14. Beneath sci-fi sound: primer, science fiction sound design, and American independent cinema

    OpenAIRE

    Johnston, Nessa

    2012-01-01

    Primer is a very low-budget science-fiction film that deals with the subject of time travel; however, it looks and sounds distinctly different from other films associated with the genre. While Hollywood blockbuster sci-fi relies on “sound spectacle” as a key attraction, Primer in contrast sounds “lo-fi” and screen-centred, mixed to two-channel stereo rather than the now industry-standard 5.1 surround sound. Although this is partly a consequence of the economics of its production, the...

  15. Brainstem auditory evoked potentials with the use of acoustic clicks and complex verbal sounds in young adults with learning disabilities.

    Science.gov (United States)

    Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos

    2013-01-01

    'other learning disabilities' and who were characterized as with 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C and interpeak latencies A-C in comparison with the control group. Acoustic representation of a speech sound and, in particular, the disyllabic word 'baba' was found to be abnormal, as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or acoustic neuropathy are necessary until this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.

  16. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  17. Identifying seizure onset zone from electrocorticographic recordings: A machine learning approach based on phase locking value.

    Science.gov (United States)

    Elahian, Bahareh; Yeasin, Mohammed; Mudigoudar, Basanagoud; Wheless, James W; Babajani-Feremi, Abbas

    2017-10-01

    Using a novel technique based on phase locking value (PLV), we investigated the potential for features extracted from electrocorticographic (ECoG) recordings to serve as biomarkers to identify the seizure onset zone (SOZ). We computed the PLV between the phase of the amplitude of high gamma activity (80-150 Hz) and the phase of lower frequency rhythms (4-30 Hz) from ECoG recordings obtained from 10 patients with epilepsy (21 seizures). We extracted five features from the PLV and used a machine learning approach based on logistic regression to build a model that classifies electrodes as SOZ or non-SOZ. More than 96% of the electrodes identified as SOZ by our algorithm were within the resected area in six seizure-free patients. In four non-seizure-free patients, more than 31% of the SOZ electrodes identified by our algorithm were outside the resected area. In addition, we observed that the seizure outcome in non-seizure-free patients correlated with the number of non-resected SOZ electrodes identified by our algorithm. This machine learning approach, based on features extracted from the PLV, effectively identified electrodes within the SOZ. The approach has the potential to assist clinicians in surgical decision-making when pre-surgical intracranial recordings are utilized. Copyright © 2017 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
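
    A minimal sketch of the coupling measure and classification step described above, assuming standard SciPy/scikit-learn tooling: the band edges follow the abstract, but the helper names, the single-feature design, and the synthetic data and labels are illustrative assumptions (the paper's five PLV-derived features are not specified here).

      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert
      from sklearn.linear_model import LogisticRegression

      def bandpass(x, lo, hi, fs, order=4):
          sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
          return sosfiltfilt(sos, x)

      def plv(x, fs):
          # Phase of the high-gamma (80-150 Hz) amplitude envelope...
          env = np.abs(hilbert(bandpass(x, 80.0, 150.0, fs)))
          phi_env = np.angle(hilbert(bandpass(env, 4.0, 30.0, fs)))
          # ...locked against the phase of the low-frequency rhythm (4-30 Hz).
          phi_low = np.angle(hilbert(bandpass(x, 4.0, 30.0, fs)))
          return np.abs(np.mean(np.exp(1j * (phi_env - phi_low))))

      # Hypothetical training set: one PLV value per electrode, plus
      # placeholder SOZ / non-SOZ labels standing in for clinical markings.
      fs = 1000.0
      rng = np.random.default_rng(0)
      ecog = rng.standard_normal((40, int(10 * fs)))  # 40 electrodes, 10 s
      X = np.array([[plv(ch, fs)] for ch in ecog])
      y = (np.arange(40) < 8).astype(int)             # 8 "SOZ" electrodes
      model = LogisticRegression().fit(X, y)
      print(model.predict_proba(X[:3]))               # columns: P(non-SOZ), P(SOZ)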

  18. Direct speed of sound measurement within the atmosphere during a national holiday in New Zealand

    Science.gov (United States)

    Vollmer, M.

    2018-05-01

    Measuring the speed of sound belongs to almost any physics curriculum. Two methods dominate: measuring the resonance phenomena of standing waves, or time-of-flight measurements. The second type is conceptually simpler; however, performing such experiments over distances of meters usually requires precise electronic timing equipment if accurate results are to be obtained. Here, a time-of-flight measurement from a video recording over a distance of several km is reported, yielding the speed of sound with an accuracy of the order of 1%.
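
    The arithmetic behind such a measurement is straightforward: light from a distant flash arrives essentially instantly, so the video-frame delay between seeing the flash and hearing the bang gives the sound's travel time. A toy calculation follows; the event, distance, frame rate, and frame indices are made up for illustration, not values from the paper.

      # Hypothetical video of a distant flash-and-bang event (e.g., a firework):
      # flash visible at frame f_flash, bang audible at frame f_bang.
      FPS = 30.0
      DISTANCE_M = 3400.0          # assumed distance to the source (m)
      f_flash, f_bang = 120, 420   # illustrative frame indices

      delay_s = (f_bang - f_flash) / FPS  # light's travel time is negligible
      speed = DISTANCE_M / delay_s
      print(f"speed of sound ~ {speed:.0f} m/s")  # ~ 340 m/s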

  19. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity ... Harmonization is needed, and a European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  20. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    Science.gov (United States)

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second involves sonifying the swing phase of gait in real time, using motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported that show positive results for both techniques in reducing step-length variability. Potential future directions for how these sound-based approaches could be used to manage gait disturbances in Parkinson's are also discussed.
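
    As a rough illustration of the second, real-time technique, the sketch below maps a foot-velocity trace onto the pitch of a sine tone, so that a faster swing produces a higher pitch. The pitch range, sample rates, and the velocity trace itself are illustrative assumptions, not the authors' synthesis engine.

      import numpy as np
      from scipy.io import wavfile

      def sonify_swing(velocity, motion_fs=120, audio_fs=44100,
                       f_lo=220.0, f_hi=880.0):
          """Map a motion-capture velocity trace (m/s) to sine-tone pitch."""
          t = np.arange(int(len(velocity) / motion_fs * audio_fs)) / audio_fs
          v = np.interp(t, np.arange(len(velocity)) / motion_fs, velocity)
          v = (v - v.min()) / (np.ptp(v) + 1e-12)         # normalize to 0..1
          freq = f_lo + (f_hi - f_lo) * v                 # velocity -> pitch
          phase = 2 * np.pi * np.cumsum(freq) / audio_fs  # integrate frequency
          return (0.5 * np.sin(phase)).astype(np.float32)

      # Hypothetical half-second swing that speeds up and then slows down.
      vel = np.sin(np.linspace(0.0, np.pi, 60))
      wavfile.write("swing_sonified.wav", 44100, sonify_swing(vel))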