WorldWideScience

Sample records for video recordings sound

  1. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional-quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  2. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that treat video game scoring as a contemporary creative practice.

  3. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and of determining whether a mobile phone video is an original version fit for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of their sound signals, and the differences between the delay times for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals of mobile phone videos offered as legal evidence were analyzed to ascertain whether they retain the unique characteristics of the original version. The objective of this study was to find a method for validating mobile phone videos as legal evidence on the basis of differences in the delay times of their sound input signals. © 2017 American Academy of Forensic Sciences.

  4. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  5. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently, most ophthalmic operating rooms are equipped with an analog video recording system [an analog charge-coupled device (CCD) camera for video grabbing and a video cassette recorder for recording]. We discuss the various advantages of a digital video capture device and its archiving capabilities, as well as our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  6. Real, foley or synthetic? An evaluation of everyday walking sounds

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Grani, Francesco

    2013-01-01

    in using foley sounds for a film track. In particular this work focuses on walking sounds: five different scenes of a walking person were video recorded and each video was then mixed with the three different kinds of sounds mentioned above. Subjects were asked to recognise and describe the action performed...

  7. Super VHS video cassette recorder, A-SB88; Super VHS video A-SB88

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A super VHS video cassette recorder, the A-SB88, was commercialized with no compromises in picture quality, sound quality, operability, energy conservation, or design. For picture quality, the VCR incorporates the S-ET system, which achieves quality comparable to S-VHS even on normal tape, together with a three-dimensional Y/C separation circuit with dynamic moving-image detection, three-dimensional DNR (digital noise reduction), a TBC (time base corrector), and an FE (flying erase) circuit. For operability, it provides a remote control with a large LCD, 400x high-speed rewind, a reservation system that makes it simple to reserve, for example, a serial drama, and a function for searching for the end of a recording. On the environmental side, stand-by power consumption was reduced to 1/10 that of conventional models (compared with the Toshiba A-BS6 with the display power off). (translated by NEDO)

  8. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of modifying a video tape recorder (VTR) to add data recording capability was conducted. The system is an on-board system that supports Spacelab experiments as a dedicated video and data recording system operating independently of the normal operation of the Orbiter, Spacelab and the other experiments. It continuously records the video image signals together with the acquired data, status and the operator's voice on a single cassette video tape. Recorded subjects include the crew's actions, animal behavior, microscopic views and materials melting in a furnace. It is therefore expected that experimenters can make a very easy and convenient analysis of the synchronized video, voice and data signals in their post-flight analysis.

  9. Implications of the law on video recording in clinical practice.

    Science.gov (United States)

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  10. A software oscilloscope for DOS computers with an integrated remote control for a video tape recorder. The assignment of acoustic events to behavioural observations.

    Science.gov (United States)

    Höller, P

    1995-12-01

    With only a little knowledge of programming IBM-compatible computers in BASIC, it is possible to create a digital software oscilloscope with sampling rates up to 17 kHz (depending on CPU and bus speed). The only additional hardware requirement is a common Sound Blaster-compatible sound card. The system presented in this paper was built to analyse the direction a flying bat is facing during sound emission. For this reason the system works with some additional hardware devices in order to monitor video sequences on the computer screen, overlaid by an online oscillogram. Using an RS232 interface for a Panasonic video tape recorder, both the oscillogram and the video tape recorder can be controlled simultaneously and, moreover, analysed frame by frame. Not only acoustic events but also action potentials, myograms, EEGs and other physiological data can be digitized and analysed in combination with the behavioural data of an experimental subject.

  11. Sound and recording applications and theory

    CERN Document Server

    Rumsey, Francis

    2014-01-01

    Providing vital reading for audio students and trainee engineers, this guide is ideal for anyone who wants a solid grounding in both theory and industry practices in audio, sound and recording. There are many books on the market covering "how to work it" when it comes to audio equipment, but Sound and Recording isn't one of them. Instead, you'll gain an understanding of "how it works" with this approachable guide to audio systems. New to this edition: the digital audio section has been revised substantially to include the latest developments in audio networking (e.g. RAVENNA, AES X-192, AVB), high-resolut

  12. Clients' experience of video recordings of their psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies of how these recordings are experienced by the clients. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents a qualitative, explorative study of clients' experiences...

  13. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., FLAC) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
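
    The low-complexity approach described in this abstract can be illustrated with a generic sketch (this is not the paper's algorithm): a first-order linear predictor removes sample-to-sample correlation, and the roughly Laplacian-distributed residuals are Rice-coded, which needs only shifts and additions on a microprocessor.

```python
# Illustrative sketch only -- NOT the algorithm from the paper above.
# First-order prediction + Rice coding, a classic low-complexity
# lossless scheme for audio-like signals.
import math

def zigzag(n):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_bits(value, k):
    """Bits needed to Rice-code one non-negative integer with parameter k."""
    return (value >> k) + 1 + k  # unary quotient, stop bit, k remainder bits

def compressed_size_bits(samples, k=6):
    """Total coded size of a 16-bit sample stream (size only, no bitstream)."""
    prev, bits = 0, 0
    for s in samples:
        bits += rice_bits(zigzag(s - prev), k)  # code the prediction error
        prev = s
    return bits

# A smooth signal compresses well: 16 bits/sample raw vs. ~10 bits coded.
signal = [int(1000 * math.sin(2 * math.pi * 440 * t / 16000)) for t in range(1600)]
raw = 16 * len(signal)
print(raw / compressed_size_bits(signal))  # compression factor > 1
```

    The fixed Rice parameter k is a simplification; practical coders adapt it to the recent residual magnitude.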

  14. Selection and evaluation of video tape recorders for surveillance applications

    International Nuclear Information System (INIS)

    Martinez, R.L.

    1988-01-01

    Unattended surveillance places unique requirements on video recorders. One such requirement, extended operational reliability, often cannot be determined from the manufacturers' data. Following market surveys and preliminary testing, the Sony 8mm EVO-210 recorder was selected for use in the Modular Integrated Video System (MIVS) while concurrently undergoing extensive reliability testing. A microprocessor-based controller was developed to life-test and evaluate the performance of the video cassette recorders. The controller can insert a unique binary count in the vertical interval of the recorder's video signal for each scene. This feature allows automatic verification of the recorded data using a MIVS Review Station. Initially, twenty recorders were subjected to an accelerated life test, which involves recording one scene (eight video frames) every 15 seconds. The recorders were operated in the exact manner in which they are utilized in the MIVS. This paper describes the results of the preliminary testing, the accelerated life test and the extensive testing of 130 Sony EVO-210 recorders

  15. PVR system design of advanced video navigation reinforced with audible sound

    NARCIS (Netherlands)

    Eerenberg, O.; Aarts, R.; De With, P.N.

    2014-01-01

    This paper presents an advanced video navigation concept for Personal Video Recording (PVR), based on jointly using the primary image and a Picture-in-Picture (PiP) image, featuring combined rendering of normal-play video fragments with audio and fast-search video. The hindering loss of audio during

  16. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Full Text Available Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility from noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when acoustic-feature power-normalized cepstral coefficients are used as inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated on the basis of the three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.
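
    As a minimal illustration of the three time-domain features named in this abstract (not the paper's implementation), they can be computed from a list of detected bowel-sound events; the (start_s, end_s, rms) tuple format and all values below are hypothetical.

```python
import math

# Hypothetical sketch: BS per minute, SNR, and sound-to-sound interval
# from a list of detected bowel-sound events.
def motility_features(events, duration_s, noise_rms):
    """events: list of (start_s, end_s, rms) tuples for detected sounds."""
    bs_per_minute = 60.0 * len(events) / duration_s
    # SNR: mean event level relative to the background noise floor, in dB.
    mean_rms = sum(rms for _, _, rms in events) / len(events)
    snr_db = 20.0 * math.log10(mean_rms / noise_rms)
    # Sound-to-sound interval: gap from the end of one event to the
    # start of the next, averaged over consecutive pairs.
    gaps = [b[0] - a[1] for a, b in zip(events, events[1:])]
    mean_interval_s = sum(gaps) / len(gaps)
    return bs_per_minute, snr_db, mean_interval_s

events = [(0.0, 0.1, 2.0), (1.0, 1.2, 2.0), (2.0, 2.1, 2.0)]
bs, snr, gap = motility_features(events, duration_s=60.0, noise_rms=1.0)
print(bs, snr, gap)  # 3 events/min, ~6 dB, 0.85 s mean gap
```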

  17. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected with MP3 players or camcorders at low cost and high reliability, and shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting such interesting concepts in a collection of these real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. The third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of

  18. A description of externally recorded womb sounds in human subjects during gestation.

    Science.gov (United States)

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

    Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and to build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially marketed womb sounds selected for their availability, popularity, and claims that they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra
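
    The band comparison described in this abstract can be sketched as follows (a hypothetical illustration, not the study's analysis code): sum spectral energy in the three frequency bands from a direct discrete Fourier transform of a short excerpt.

```python
import cmath, math

# Hypothetical sketch: energy in the low/mid/high bands named above,
# via a direct DFT (O(n^2) -- fine for a short illustrative excerpt).
def band_energies(samples, fs, bands):
    n = len(samples)
    energy = {name: 0.0 for name in bands}
    for k in range(1, n // 2):  # skip DC
        freq = k * fs / n
        X = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        for name, (lo, hi) in bands.items():
            if lo <= freq < hi:
                energy[name] += abs(X) ** 2
    return energy

bands = {"low": (10, 100), "mid": (100, 500), "high": (500, 5000)}
fs, n = 5000, 500
# Synthetic stand-in signal: strong 50 Hz, weaker 300 Hz and 1000 Hz tones.
sig = [math.sin(2 * math.pi * 50 * t / fs)
       + 0.3 * math.sin(2 * math.pi * 300 * t / fs)
       + 0.1 * math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
e = band_energies(sig, fs, bands)
print(e["low"] > e["mid"] > e["high"])  # → True
```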

  19. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision and so make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  20. Segmentation of heart sound recordings by a duration-dependent hidden Markov model

    International Nuclear Information System (INIS)

    Schmidt, S E; Graff, C; Toft, E; Struijk, J J; Holst-Hansen, C

    2010-01-01

    Digital stethoscopes offer new opportunities for computerized analysis of heart sounds. Segmentation of heart sound recordings into periods related to the first and second heart sounds (S1 and S2) is fundamental to the analysis process. However, segmentation of heart sounds recorded with handheld stethoscopes in clinical environments is often complicated by background noise. A duration-dependent hidden Markov model (DHMM) is proposed for robust segmentation of heart sounds. The DHMM identifies the most likely sequence of physiological heart sounds based on the duration of the events, the amplitude of the signal envelope and a predefined model structure. The DHMM was developed and tested with heart sounds recorded bedside with a commercially available handheld stethoscope from a population of patients referred for coronary arterioangiography. The DHMM identified 890 S1 and S2 sounds out of 901, which corresponds to 98.8% (CI: 97.8–99.3%) sensitivity in 73 test patients, and 13 misplaced sounds out of 903 identified sounds, which corresponds to 98.6% (CI: 97.6–99.1%) positive predictivity. These results indicate that the DHMM is an appropriate model of the heart cycle and suitable for segmentation of clinically recorded heart sounds.

  1. Multiple Generations on Video Tape Recorders.

    Science.gov (United States)

    Wiens, Jacob H.

    Helical scan video tape recorders were tested for their dubbing characteristics in order to make selection data available to media personnel. The equipment, two recorders of each type tested, was submitted by the manufacturers. The test was designed to produce quality evaluations for three generations of a single tape, thereby encompassing all…

  2. 37 CFR 380.3 - Royalty fees for the public performance of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... EPHEMERAL REPRODUCTIONS § 380.3 Royalty fees for the public performance of sound recordings and for ephemeral recordings. (a) Royalty rates and fees for eligible digital transmissions of sound recordings made...

  3. Implications of the law on video recording in clinical practice

    OpenAIRE

    Henken, Kirsten R.; Jansen, Frank-Willem; Klein, Jan; Stassen, Laurents; Dankelman, Jenny; Dobbelsteen, John

    2012-01-01

    Background: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health ...

  4. Video Recordings in Public Libraries.

    Science.gov (United States)

    Doyle, Stephen

    1984-01-01

    Reports on development and operation of public library collection of video recordings, describes results of user survey conducted over 6-month period, and offers brief guidelines. Potential users, censorship and copyright, organization of collection, fees, damage and loss, funding, purchasing and promotion, formats, processing and cataloging,…

  5. Psychophysiological Assessment Of Fear Experience In Response To Sound During Computer Video Gameplay

    DEFF Research Database (Denmark)

    Garner, Tom Alexander; Grimshaw, Mark

    2013-01-01

    The potential value of a looping biometric feedback system as a key component of adaptive computer video games is significant. Psychophysiological measures are essential to the development of an automated emotion recognition program capable of interpreting physiological data into models of affect... and systematically altering the game environment in response. This article presents empirical data whose analysis advocates electrodermal activity and electromyography as suitable physiological measures to work effectively within a computer video game-based biometric feedback loop, within which sound...

  6. Live lecture versus video-recorded lecture: are students voting with their feet?

    Science.gov (United States)

    Cardall, Scott; Krupat, Edward; Ulrich, Michael

    2008-12-01

    In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lecture as opposed to attending lecture, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.

  7. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared Foley and real sounds originated by an identical action. The main purpose was to evaluate whether sound effects...

  8. 78 FR 40421 - Inquiry Regarding Video Description in Video Programming Distributed on Television and on the...

    Science.gov (United States)

    2013-07-05

    ... description services for television are provided on a secondary audio stream, and typically a consumer can... box. The Commission recently adopted rules requiring apparatus that is designed to receive, play back, or record video programming transmitted simultaneously with sound to make secondary audio streams...

  9. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is determined visually, depends on the dose rate and the inertia of the television pick-up tube; this relationship is discussed. In recent years, video recorders have been used increasingly in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop can largely be overcome. The influence of electrical bandwidth and the number of lines on the resolution of the stored X-ray television image is explained in more detail. (orig.) [de

  10. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for the diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most common and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sounds that may carry important information for heart disease diagnosis. In this paper, we propose a new method, applicable in real time, to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in the time as well as the time-frequency domain is first detected and then applied as a reference signal for discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease, with several noises induced during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively.
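
    A much-simplified sketch of the underlying idea can be given (the envelope, lag range and threshold below are hypothetical, not the paper's parameters): a clean heart-sound envelope repeats once per heart cycle, so a weak peak in its normalized autocorrelation at cycle-scale lags signals noise contamination.

```python
import math, random

# Hypothetical sketch: periodicity as a signature of clean heart sound.
def autocorr_peak(x, min_lag, max_lag):
    """Largest normalized autocorrelation of x over the given lag range."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    energy = sum(v * v for v in xc) or 1.0
    return max(
        sum(xc[i] * xc[i + lag] for i in range(n - lag)) / energy
        for lag in range(min_lag, max_lag)
    )

def is_noisy(envelope, min_lag=40, max_lag=120, threshold=0.5):
    """Flag a segment whose envelope lacks cycle-scale periodicity."""
    return autocorr_peak(envelope, min_lag, max_lag) < threshold

random.seed(0)
# Toy envelopes: Gaussian bursts every 80 samples vs. uniform noise.
periodic = [math.exp(-((t % 80) / 6.0) ** 2) for t in range(400)]
noise = [random.random() for _ in range(400)]
print(is_noisy(periodic), is_noisy(noise))  # → False True
```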

  11. Optical Reading and Playing of Sound Signals from Vinyl Records

    OpenAIRE

    Hensman, Arnold; Casey, Kevin

    2007-01-01

    While advanced digital music systems such as compact disk players and MP3 have become the standard in sound reproduction technology, critics claim that conversion to digital often results in a loss of sound quality and richness. For this reason, vinyl records remain the medium of choice for many audiophiles involved in specialist areas. The waveform cut into a vinyl record is an exact replica of the analogue version from the original source. However, while some perceive this media as reproduc...

  12. The influence of video recordings on beginning therapists’ learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents …

  13. Recorded peer video chat as a research and development tool

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Cowie, Bronwen

    2016-01-01

    When practising teachers take time to exchange their experiences and reflect on their teaching realities as critical friends, they add meaning and depth to educational research. When peer talk is facilitated through video chat platforms, teachers can meet (virtually) face to face even when … recordings were transcribed and used to prompt further discussion. The recording of the video chat meetings provided an opportunity for researchers to listen in and follow up on points they felt needed further unpacking or clarification. The recorded peer video chat conversations provided an additional … opportunity to stimulate and support teacher participants in a process of critical analysis and reflection on practice. The discussions themselves were empowering because, in the absence of the researcher, the teachers, in negotiation with peers, chose what was important enough to them to take time to discuss.

  14. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    Science.gov (United States)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year-1), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.
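The provenance fingerprinting described above can be sketched as a nearest-reference classification in log-ratio space. This is an illustrative toy, not the authors' statistical method, and all numeric values in the usage example are hypothetical; real reference values would come from measured end-member sediments.

```python
import math

def classify_provenance(sample, ref_a, ref_b,
                        labels=("Copper River", "Prince William Sound")):
    """Assign a sediment sample to the nearer of two reference sources by
    Euclidean distance in log-ratio space (elemental ratios behave
    multiplicatively, so distances are taken between log ratios)."""
    d_a = sum((math.log(sample[k]) - math.log(ref_a[k])) ** 2 for k in sample)
    d_b = sum((math.log(sample[k]) - math.log(ref_b[k])) ** 2 for k in sample)
    return labels[0] if d_a <= d_b else labels[1]
```

With hypothetical end-member medians, a sample high in Sr/Pb and Cu/Pb lands with the Copper River source, while one high in K/Ca and Rb/Sr lands with the in-Sound source, mirroring the diagnostic pattern reported.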

  15. Songbirds use pulse tone register in two voices to generate low-frequency sound

    DEFF Research Database (Denmark)

    Jensen, Kenneth Kragh; Cooper, Brenton G.; Larsen, Ole Næsbye

    2007-01-01

    … the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short openings of the labia, and pulse generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously …

  16. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    K.R. Henken (Kirsten R.); F-W. Jansen (Frank-Willem); J. Klein (Jan); L.P. Stassen (Laurents); J. Dankelman (Jenny); J.J. van den Dobbelsteen (John)

    2012-01-01

    textabstractBackground: Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A

  17. Implications of the law on video recording in clinical practice

    NARCIS (Netherlands)

    Henken, K.R.; Jansen, F.W.; Klein, J.; Stassen, L.P.S.; Dankelman, J.; Van den Dobbelsteen, J.J.

    2012-01-01

    Background Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear

  18. Mobile, portable lightweight wireless video recording solutions for homeland security, defense, and law enforcement applications

    Science.gov (United States)

    Sandy, Matt; Goldburt, Tim; Carapezza, Edward M.

    2015-05-01

    It is desirable for executive officers of law enforcement agencies and other executive officers in homeland security and defense, as well as first responders, to have some basic information about the latest trends in mobile, portable, lightweight wireless video recording solutions available on the market. This paper reviews and discusses a number of studies on the use and effectiveness of wireless video recording solutions. It provides insights into the features of wearable video recording devices that offer excellent applications for the categories of security agencies listed in this paper. It also provides answers to key questions such as: how to determine the type of video recording solution most suitable for the needs of your agency, the essential features to look for when selecting a device for your video needs, and the privacy issues involved with wearable video recording devices.

  19. THE DETERMINATION OF THE SHARPNESS DEPTH BORDERS AND CORRESPONDING PHOTOGRAPHY AND VIDEO RECORDING PARAMETERS FOR CONTEMPORARY VIDEO TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    E. G. Zaytseva

    2011-01-01

    The method of determining the sharpness depth borders (depth-of-field limits) was improved for contemporary video technology, and a computer programme was created for determining the corresponding video recording parameters.

  20. Seizure semiology inferred from clinical descriptions and from video recordings. How accurate are they?

    DEFF Research Database (Denmark)

    Beniczky, Simona Alexandra; Fogarasi, András; Neufeld, Miri

    2012-01-01

    To assess how accurate the interpretation of seizure semiology is when inferred from witnessed seizure descriptions and from video recordings, five epileptologists analyzed 41 seizures from 30 consecutive patients who had clinical episodes in the epilepsy monitoring unit. For each clinical episode … for the descriptions (k=0.67) and almost perfect for the video recordings (k=0.95). Video recordings significantly increase the accuracy of seizure interpretation.

  1. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  2. Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces.

    Science.gov (United States)

    Ekman, Maria Rådsten; Lundén, Peter; Nilsson, Mats E

    2015-11-01

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated to pleasantness. However, the results of an additional experiment, using the same sounds set equal in overall level, found a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.
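The two acoustic descriptors the study relates to pleasantness, overall level and spectral centroid, can be computed from a recorded waveform roughly as follows. This is a generic sketch; absolute dB SPL would additionally require a calibrated microphone reference.

```python
import numpy as np

def spectral_centroid(x, fs):
    """Magnitude-weighted mean frequency of the spectrum, in Hz; a rough
    proxy for perceived brightness."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return float((freqs * mag).sum() / mag.sum())

def rms_level_db(x, ref=1.0):
    """Overall level in dB relative to `ref` (calibration is needed to
    express this as SPL)."""
    rms = np.sqrt(np.mean(np.square(x)))
    return float(20 * np.log10(rms / ref))
```

On this view, the loud steady-state fountain sounds combine a high RMS level with low temporal variability of that level, while the soft variable sounds show the opposite pattern.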

  3. The influence of video recordings on beginning therapist’s learning in psychotherapy training

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Olesen, Mette Kirk; Kløve, Astrid

    2010-01-01

    Background: Due to the development of technologies and the low costs, video recording of psychotherapy sessions has gained ground in training and supervision. While some praise the advantages, others decline to use this technological aid for ethical, theoretical or clinical reasons. Despite the current relatively widespread use of video, one finds only a very limited number of empirical studies on how these recordings specifically influence the learning process of the beginning therapist. Aim: After a brief discussion of the pros and cons of the use of video recordings, this paper presents …

  4. An experimental digital consumer recorder for MPEG-coded video signals

    NARCIS (Netherlands)

    Saeijs, R.W.J.J.; With, de P.H.N.; Rijckaert, A.M.A.; Wong, C.

    1995-01-01

    The concept and real-time implementation of an experimental home-use digital recorder is presented, capable of recording MPEG-compressed video signals. The system has small recording mechanics based on the DVC standard and it uses MPEG compression for trick-mode signals as well

  5. Evaluating Student Self-Assessment through Video-Recorded Patient Simulations.

    Science.gov (United States)

    Sanderson, Tammy R; Kearney, Rachel C; Kissell, Denise; Salisbury, Jessica

    2016-08-01

    The purpose of this pilot study was to determine if the use of a video-recorded clinical session affects the accuracy of dental hygiene student self-assessment and dental hygiene instructor feedback. A repeated measures experiment was conducted. The use of the ODU 11/12 explorer was taught to students and participating faculty through video and demonstration. Students then demonstrated activation of the explorer on a student partner using the same technique. While faculty completed the student assessment in real time, the sessions were video recorded. After completing the activation of the explorer, students and faculty completed an assessment of the student's performance using a rubric. A week later, both students and faculty viewed the video of the clinical skill performance and reassessed the student's performance using the same rubric. The student videos were randomly assigned a number, so faculty reassessed the performance without access to the student's identity or the score that was initially given. Twenty-eight students and 4 pre-clinical faculty completed the study. Students' average score was 4.68±1.16 on the first assessment and slightly higher, 4.89±1.45, when reviewed by video. Faculty average scores were 5.07±2.13 at the first assessment and 4.79±2.54 on the second assessment with the video. While no significant differences were found in overall scores, there was a significant difference in the scores of the grading criteria compared with the expert assessment scores (p=0.0001). This pilot study shows that calibration and assessment without bias in education is a challenge. Analyzing and incorporating new techniques can result in more exact assessment of student performance and self-assessment. Copyright © 2016 The American Dental Hygienists' Association.

  6. How to implement live video recording in the clinical environment: A practical guide for clinical services.

    Science.gov (United States)

    Lloyd, Adam; Dewar, Alistair; Edgar, Simon; Caesar, Dave; Gowens, Paul; Clegg, Gareth

    2017-06-01

    The use of video in healthcare is becoming more common, particularly in simulation and educational settings. However, video recording live episodes of clinical care is far less routine. To provide a practical guide for clinical services to embed live video recording. Using Kotter's 8-step process for leading change, we provide a 'how to' guide to navigate the challenges required to implement a continuous video-audit system based on our experience of video recording in our emergency department resuscitation rooms. The most significant hurdles in installing continuous video audit in a busy clinical area involve change management rather than equipment. Clinicians are faced with considerable ethical, legal and data protection challenges which are the primary barriers for services that pursue video recording of patient care. Existing accounts of video use rarely acknowledge the organisational and cultural dimensions that are key to the success of establishing a video system. This article outlines core implementation issues that need to be addressed if video is to become part of routine care delivery. By focussing on issues such as staff acceptability, departmental culture and organisational readiness, we provide a roadmap that can be pragmatically adapted by all clinical environments, locally and internationally, that seek to utilise video recording as an approach to improving clinical care. © 2017 John Wiley & Sons Ltd.

  7. Tipping point analysis of a large ocean ambient sound record

    Science.gov (United States)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015), high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the big-data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al, GRL 2007, [2] Livina et al, Climate of the Past 2010, [3] Livina et al, Chaos 2015.
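The preprocessing described, reducing the 250 Hz hydrophone waveform to 10-min-average band levels, can be sketched as follows. This is an illustrative reduction, not the authors' pipeline; the 1 µPa reference is the standard underwater convention, and the band edges are parameters.

```python
import numpy as np

def band_spl(chunk, fs, f_lo, f_hi, p_ref=1e-6):
    """Sound pressure level (dB re 1 µPa) of the spectral energy falling in
    [f_lo, f_hi), via Parseval's theorem on the chunk's FFT."""
    spec = np.fft.rfft(chunk)
    freqs = np.fft.rfftfreq(len(chunk), d=1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    # mean-square pressure contributed by the band (DC/Nyquist excluded)
    ms = 2.0 * np.sum(np.abs(spec[band]) ** 2) / len(chunk) ** 2
    return float(10 * np.log10(ms / p_ref ** 2))

def ten_minute_levels(record, fs, f_lo, f_hi):
    """One band level per non-overlapping 10-minute chunk of the record."""
    n = int(600 * fs)
    return [band_spl(record[i:i + n], fs, f_lo, f_hi)
            for i in range(0, len(record) - n + 1, n)]
```

Chunking the multi-year record this way also hints at the big-data aspect: each year of 250 Hz data yields roughly 52,000 ten-minute chunks per band, which is what makes segment handling and HPC relevant.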

  8. Video Recording With a GoPro in Hand and Upper Extremity Surgery.

    Science.gov (United States)

    Vara, Alexander D; Wu, John; Shin, Alexander Y; Sobol, Gregory; Wiater, Brett

    2016-10-01

    Video recordings of surgical procedures are an excellent tool for presentations, analyzing self-performance, illustrating publications, and educating surgeons and patients. Recording the surgeon's perspective with high-resolution video in the operating room or clinic has become readily achievable, and advances in software have made editing these videos easier. A GoPro HERO 4 Silver or Black was mounted on a head strap and worn over the surgical scrub cap, above the loupes of the operating surgeon. Five live surgical cases were recorded with the camera. The videos were uploaded to a computer and subsequently edited with iMovie or the GoPro software. The optimal settings for both the Silver and Black editions, when operating room lights are used, were determined to be a narrow view, 1080p, 60 frames per second (fps), spot meter on, protune on with auto white balance, exposure compensation at -0.5, and no polarizing lens. When the operating room lights were not used, the standard settings for a GoPro camera were ideal for positioning and editing (4K, 15 fps, spot meter and protune off). The GoPro HERO 4 provides a high-quality, cost-effective video recording of upper extremity surgical procedures from the surgeon's perspective. Challenges include finding the optimal settings for each surgical procedure and the length of recording due to battery life limitations. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  9. 37 CFR 262.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... MAKING OF EPHEMERAL REPRODUCTIONS § 262.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) Basic royalty rate. Royalty rates and fees for eligible nonsubscription...

  10. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is constrained by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that these contradictory requirements can be met halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). The main supporting assumption is that content can be compressed as far as clinicians are unable to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who classified compressed bronchoscopic video content according to its quality under the bubble-sort algorithm. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.

  11. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  12. 37 CFR 261.3 - Royalty fees for public performances of sound recordings and for ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... § 261.3 Royalty fees for public performances of sound recordings and for ephemeral recordings. (a) For the period October 28, 1998, through December 31, 2002, royalty rates and fees for eligible digital...

  13. Introducing video recording in primary care midwifery for research purposes: procedure, dataset, and use.

    NARCIS (Netherlands)

    Spelten, E.R.; Martin, L.; Gitsels, J.T.; Pereboom, M.T.R.; Hutton, E.K.; Dulmen, S. van

    2015-01-01

    Background: Video recording studies have been found to be complex; however, very few studies describe the actual introduction and enrolment of the study, the resulting dataset and its interpretation. In this paper we describe the introduction and the use of video recordings of health care provider

  14. WT Bird. Bird collision recording for offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Wiggelinkhuizen, E.J.; Rademakers, L.W.M.M.; Barhorst, S.A.M. [ECN Wind Energy, Petten (Netherlands); Den Boon, H. [E-Connection Project, Bunnik (Netherlands); Dirksen, S. [Bureau Waardenburg, Culemborg (Netherlands); Schekkerman, H. [Alterra, Wageningen (Netherlands)

    2004-11-01

    A new method for monitoring of bird collisions has been developed using video and audio registrations that are triggered by sound and vibration measurements. Remote access to the recorded images and sounds makes it possible to count the number of collisions as well as to identify the species. After the successful proof of principle and evaluation on small land-based turbines the system is now being designed for offshore wind farms. Currently the triggering system and video and audio registration are being tested on large land-based wind turbines using bird dummies. Tests of three complete prototype systems are planned for 2005.

  15. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    Science.gov (United States)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.
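The two audio manipulations examined in the study, Doppler frequency shift and distance fall-off, follow textbook acoustics. A minimal sketch of the generic formulas (not the study's implementation):

```python
import math

def doppler_shift(f_source, v_source, c=343.0):
    """Frequency heard by a stationary listener for a source moving at
    v_source m/s (positive = approaching), with speed of sound c in m/s."""
    return f_source * c / (c - v_source)

def distance_falloff_db(distance, ref_distance=1.0):
    """Inverse-square level drop of a point source, in dB relative to the
    level at ref_distance."""
    return -20.0 * math.log10(distance / ref_distance)
```

Applying these cues to a game sound source, pitch rising as it approaches and level dropping with distance, supplies exactly the kind of auditory depth information the experiment tested against the stereoscopic imagery.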

  16. The Technique of the Sound Studio: Radio, Record Production, Television, and Film. Revised Edition.

    Science.gov (United States)

    Nisbett, Alec

    Detailed explanations of the studio techniques used in radio, record, television, and film sound production are presented in as non-technical language as possible. An introductory chapter discusses the physics and physiology of sound. Subsequent chapters detail standards for sound control in the studio; explain the planning and routine of a sound…

  17. 75 FR 63434 - Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording...

    Science.gov (United States)

    2010-10-15

    ...] Availability of Compliance Guide for the Use of Video or Other Electronic Monitoring or Recording Equipment in... the availability of a compliance guide on the use of video or other electronic monitoring or recording... providing this draft guide to advise establishments that video or other electronic monitoring or recording...

  18. 37 CFR 270.2 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  19. 37 CFR 370.3 - Reports of use of sound recordings under statutory license for preexisting subscription services.

    Science.gov (United States)

    2010-07-01

    ... “Intended Playlists” for each channel and each day of the reported month. The “Intended Playlists” shall...; (2) The channel; (3) The sound recording title; (4) The featured recording artist, group, or... sound recording); (6) The marketing label of the commercially available album or other product on which...

  20. Self-Reflection of Video-Recorded High-Fidelity Simulations and Development of Clinical Judgment.

    Science.gov (United States)

    Bussard, Michelle E

    2016-09-01

    Nurse educators are increasingly using high-fidelity simulators to improve prelicensure nursing students' ability to develop clinical judgment. Traditionally, oral debriefing sessions have immediately followed the simulation scenarios as a method for students to connect theory to practice and therefore develop clinical judgment. Recently, video recording of the simulation scenarios is being incorporated. This qualitative, interpretive description study was conducted to identify whether self-reflection on video-recorded high-fidelity simulation (HFS) scenarios helped prelicensure nursing students to develop clinical judgment. Tanner's clinical judgment model was the framework for this study. Four themes emerged from this study: Confidence, Communication, Decision Making, and Change in Clinical Practice. This study indicated that self-reflection of video-recorded HFS scenarios is beneficial for prelicensure nursing students to develop clinical judgment. [J Nurs Educ. 2016;55(9):522-527.]. Copyright 2016, SLACK Incorporated.

  1. Sound recordings of road maintenance equipment on the Lincoln National Forest, New Mexico

    Science.gov (United States)

    D. K. Delaney; T. G. Grubb

    2004-01-01

    The purpose of this pilot study was to record, characterize, and quantify road maintenance activity in Mexican spotted owl (Strix occidentalis lucida) habitat to gauge potential sound level exposure for owls during road maintenance activities. We measured sound levels from three different types of road maintenance equipment (rock crusher/loader,...

  2. Vehicle engine sound design based on an active noise control system

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, M. [Siemens VDO Automotive, Auburn Hills, MI (United States)

    2002-07-01

    A study has been carried out to identify the types of vehicle engine sounds that drivers prefer while driving at different locations and under different driving conditions. An active noise control system controlled the sound at the air intake orifice of a vehicle engine's first sixteen orders and half orders. The active noise control system was used to change the engine sound to quiet, harmonic, high harmonic, spectral-shaped and growl. Videos were made of the roads traversed, together with binaural recordings of vehicle interior sounds and measurements of the vibrations of the vehicle floor pan. Jury tapes were made up for day driving, night driving, and driving in the rain during the day for each of the sites. Jurors used paired comparisons to evaluate the vehicle interior sounds while sitting in a vehicle simulator developed by Siemens VDO that replicated the videos of the roads traversed, the binaural recordings of the vehicle interior sounds, and the vibrations of the floor pan and seat. (orig.)

  3. Examining in vivo tympanic membrane mobility using smart phone video-otoscopy and phase-based Eulerian video magnification

    Science.gov (United States)

    Janatka, Mirek; Ramdoo, Krishan S.; Tatla, Taran; Pachtrachai, Krittin; Elson, Daniel S.; Stoyanov, Danail

    2017-03-01

    The tympanic membrane (TM) is the bridging element between the pressure waves of sound in air and the ossicular chain. It allows sound to be conducted into the inner ear, achieving the human sense of hearing. Otitis media with effusion (OME, commonly referred to as 'glue ear') is a typical condition in infants that prevents the vibration of the TM and causes conductive hearing loss, which can stunt early-stage development if undiagnosed. Furthermore, OME is hard to identify in this age group, as infants cannot respond to typical audiometry tests. Tympanometry allows the mobility of the TM to be examined without patient response, but requires expensive apparatus and specialist training. By combining a smartphone capable of 240 frames per second video recording with an otoscopic clip-on accessory, this paper presents a novel application of Eulerian Video Magnification (EVM) to video-otology that could assist in diagnosing OME. We present preliminary results showing a spatio-temporal slice taken from an exaggerated video visualization of the TM being excited in vivo in a healthy ear. These preliminary results demonstrate the potential of such an approach for diagnosing OME by visual inspection as an alternative to tympanometry, which could be used remotely and hence help diagnosis in a wider population pool.
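The magnification idea can be conveyed with a toy linear Eulerian magnification of a frame stack: band-pass each pixel's intensity over time and add it back amplified. Note the paper uses the phase-based variant (built on complex steerable pyramids), which handles larger motions and noise better than this linear sketch; the parameters below are illustrative.

```python
import numpy as np

def magnify_motion(frames, fps, f_lo, f_hi, alpha):
    """Toy linear Eulerian video magnification. `frames` is a (T, H, W)
    float array; temporal frequencies in [f_lo, f_hi] Hz are amplified
    by `alpha` at every pixel."""
    spec = np.fft.fft(frames, axis=0)
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    band = (np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi)
    bandpassed = np.fft.ifft(spec * band[:, None, None], axis=0).real
    return frames + alpha * bandpassed
```

At 240 fps, as in the smartphone recordings described, only TM motion components below the 120 Hz temporal Nyquist limit are representable, which constrains the excitation frequencies such a visualization can exaggerate.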

  4. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    Science.gov (United States)

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy and with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e. smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  5. Viewing speech in action: speech articulation videos in the public domain that demonstrate the sounds of the International Phonetic Alphabet (IPA)

    OpenAIRE

    Nakai, S.; Beavan, D.; Lawson, E.; Leplâtre, G.; Scobbie, J. M.; Stuart-Smith, J.

    2016-01-01

    In this article, we introduce recently released, publicly available resources, which allow users to watch videos of hidden articulators (e.g. the tongue) during the production of various types of sounds found in the world’s languages. The articulation videos on these resources are linked to a clickable International Phonetic Alphabet chart ([International Phonetic Association. 1999. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. ...

  6. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method that is used quite frequently in plastic surgery practice. While presentations containing photographs are quite common at education seminars and congresses, video-containing presentations find more favour. For this reason, the presentation of surgical procedures in the form of real-time video display has increased, especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. These apparatuses include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus that is capable of capturing high-resolution and detailed close-up footage. The tripod-mountable camera enables video capturing from a fixed point. Certain user-specific modifications can be made to overcome some of their restrictions. Among these modifications, custom-made applications are one of the most effective solutions. This article presents the features of, and experiences with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The described apparatuses are easy to assemble, quickly installed, and inexpensive, do not require specific technical knowledge, and can be manipulated by the surgeon personally in all procedures. The author believes that in the near future video recording apparatuses will be integrated more into the operating room, become standard practice, and become more amenable to manipulation by the surgeon. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  7. Multi-Century Record of Anthropogenic Impacts on an Urbanized Mesotidal Estuary: Salem Sound, MA

    Science.gov (United States)

    Salem, MA, located north of Boston, has a rich, well-documented history dating back to settlement in 1626 CE, but the associated anthropogenic impacts on Salem Sound are poorly constrained. This project utilized dated sediment cores from the sound to assess the proxy record of an...

  8. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
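    The embedded-random-signal idea described in this record can be sketched in a few lines. This is a toy construction of ours, not the authors' code: a shared pseudo-random marker is mixed into two independently recorded streams, and cross-correlation against the known marker recovers their relative offset.

```python
import numpy as np

rng = np.random.default_rng(0)
marker = rng.standard_normal(500)          # shared pseudo-random sync signal

def embed(noise_len, offset, snr=1.0):
    """Return a noisy stream containing the marker starting at `offset`."""
    stream = rng.standard_normal(noise_len)
    stream[offset:offset + len(marker)] += snr * marker
    return stream

audio = embed(5000, offset=1200)           # e.g. audio channel
video = embed(5000, offset=1700)           # e.g. per-frame signal from the video

def locate(stream):
    """Sample index where the marker best matches the stream."""
    corr = np.correlate(stream, marker, mode="valid")
    return int(np.argmax(corr))

# Positive result: the video stream lags the audio stream by 500 samples.
relative_offset = locate(video) - locate(audio)
```

    Because the marker is known and broadband, the correlation peak stands far above the noise floor, which is what makes the method robust without dedicated synchronization inputs.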

  9. Comparison of snoring sounds between natural and drug-induced sleep recorded using a smartphone.

    Science.gov (United States)

    Koo, Soo Kweon; Kwon, Soon Bok; Moon, Ji Seung; Lee, Sang Hoon; Lee, Ho Byung; Lee, Sang Jun

    2018-08-01

    Snoring is an important clinical feature of obstructive sleep apnea (OSA), and recent studies suggest that the acoustic quality of snoring sounds is markedly different in drug-induced sleep compared with natural sleep. However, considering differences in sound recording methods and analysis parameters, further studies are required. This study explored whether acoustic analysis of drug-induced sleep is useful as a screening test that reflects the characteristics of natural sleep in snoring patients. The snoring sounds of 30 male subjects (mean age = 41.8 years) were recorded using a smartphone during natural and induced sleep, with the site of vibration noted during drug-induced sleep endoscopy (DISE); then, we compared the sound intensity (dB), formant frequencies, and spectrograms of snoring sounds. Regarding the intensity of snoring sounds, there were minor differences within the retrolingual level obstruction group, but there was no significant difference between natural and induced sleep at either obstruction site. There was no significant difference in the F1 and F2 formant frequencies of snoring sounds between natural sleep and induced sleep at either obstruction site. Compared with natural sleep, induced sleep was slightly more irregular, with a stronger intensity on the spectrogram, but the spectrograms showed the same pattern at both obstruction sites. Although further studies are required, the spectrograms and formant frequencies of the snoring sounds of induced sleep did not differ significantly from those of natural sleep, and may be used as a screening test that reflects the characteristics of natural sleep according to the obstruction site. Copyright © 2017 Elsevier B.V. All rights reserved.
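    Two of the comparisons in this record, sound intensity in dB and the spectrogram's dominant frequency band, can be sketched with toy signals. The sampling rate, frequencies, and amplitudes below are our own stand-ins, not the study's data, and formant (LPC) estimation is omitted:

```python
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
natural = 0.3 * np.sin(2 * np.pi * 120 * t)   # stand-in for a natural-sleep snore
induced = 0.4 * np.sin(2 * np.pi * 120 * t)   # slightly stronger, as reported

def intensity_db(x):
    """RMS level relative to full scale, in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Intensity difference between conditions, in dB.
diff_db = intensity_db(induced) - intensity_db(natural)

# Spectrogram: both conditions share the same dominant frequency band.
f, _, Sxx = signal.spectrogram(natural, fs=fs, nperseg=1024)
peak_hz = f[np.argmax(Sxx.mean(axis=1))]
```

    The same per-band comparison, applied to real recordings from both sleep conditions, is what lets the study conclude that the spectrogram patterns match at each obstruction site.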

  10. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest from many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  11. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest from many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
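    The noise-rejection idea behind the 70 Hz flashing necklace can be sketched as follows. All numbers and the array layout are our own toy construction: at a 4 kHz frame rate, the flashing LED produces a strong 70 Hz component in exactly one photodiode, while sunlight and room lighting contribute only DC and low-frequency energy.

```python
import numpy as np

fs = 4000                      # photodiode array frame rate (Hz)
n = 2000                       # 0.5 s of frames
pixels = 128                   # length of the linear array
t = np.arange(n) / fs

# Ambient light: a DC bias plus slow (1 Hz) drift, hitting every pixel.
frames = 2.0 + 0.5 * np.sin(2 * np.pi * 1 * t)[:, None] * np.ones(pixels)

# The LED necklace: a 70 Hz, 50% duty-cycle square wave on one pixel.
led_pixel = 37
frames[:, led_pixel] += (np.sign(np.sin(2 * np.pi * 70 * t)) + 1) / 2

# Project every pixel's time series onto a 70 Hz complex exponential
# (a single-bin DFT); the LED pixel dominates because ambient energy
# lives near DC.
ref = np.exp(-2j * np.pi * 70 * t)
power_70hz = np.abs(ref @ frames)
detected = int(np.argmax(power_70hz))
```

    The 4 kHz frame rate matters here: a 60 Hz video camera could not place the 70 Hz flash cleanly below its Nyquist limit, which is the advantage the abstract points to.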

  12. Whose Line Sound is it Anyway? Identifying the Vocalizer on Underwater Video by Localizing with a Hydrophone Array

    Directory of Open Access Journals (Sweden)

    Matthias Hoffmann-Kuhnt

    2016-11-01

    Full Text Available A new device that combined high-resolution (1080p wide-angle video and three channels of high-frequency acoustic recordings (at 500 kHz per channel in a portable underwater housing was designed and tested with wild bottlenose and spotted dolphins in the Bahamas. It consisted of three hydrophones, a GoPro camera, a small Fit PC, a set of custom preamplifiers and a high-frequency data acquisition board. Recordings were obtained to identify individual vocalizing animals through time-delay-of-arrival localizing in post-processing. The calculated source positions were then overlaid onto the video – providing the ability to identify the vocalizing animal on the recorded video. The new tool allowed for much clearer analysis of the acoustic behavior of cetaceans than was possible before.
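    The time-delay-of-arrival step in this record can be sketched with two synthetic channels. The click, noise levels, and delay below are our own toy values: the same click reaches two hydrophones at slightly different times, and the peak of the full cross-correlation recovers that delay, which (together with the array geometry) constrains the vocalizer's position.

```python
import numpy as np

fs = 500_000                             # 500 kHz per channel, as in the record
rng = np.random.default_rng(1)
click = rng.standard_normal(200)         # broadband click

delay = 73                               # true inter-channel delay (samples)
n = 4000
ch_a = np.zeros(n)
ch_a[1000:1200] = click
ch_b = np.zeros(n)
ch_b[1000 + delay:1200 + delay] = click
ch_a += 0.05 * rng.standard_normal(n)    # independent hydrophone noise
ch_b += 0.05 * rng.standard_normal(n)

# Full cross-correlation; the argmax gives how far ch_b lags ch_a.
corr = np.correlate(ch_b, ch_a, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
tdoa_s = lag / fs                        # delay in seconds
```

    With three hydrophones, two such pairwise delays are enough to localize the source in post-processing and overlay it onto the video frame.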

  13. Music Videos: The Look of the Sound

    Science.gov (United States)

    Aufderheide, Pat

    1986-01-01

    Asserts that music videos, rooted in mass marketing culture, are reshaping the language of advertising, affecting the flow of information. Raises questions about the society that creates and receives music videos. (MS)

  14. 77 FR 47120 - Distribution of 2011 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2012-08-07

    ... the motion to ascertain whether any claimant entitled to receive such royalty fees has a reasonable... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2012-3 CRB DD 2011] Distribution of 2011 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  15. 37 CFR 382.12 - Royalty fees for the public performance of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the public... Preexisting Satellite Digital Audio Radio Services § 382.12 Royalty fees for the public performance of sound recordings and the making of ephemeral recordings. (a) In general. The monthly royalty fee to be paid by a...

  16. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    Science.gov (United States)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques and when combined with high-definition video imagery it can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often with a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However …
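    The second readout approach, audio extraction directly from a video stream, can be sketched with a synthetic clip. The frame rate, tone, and scene below are our own assumptions, not the authors' setup: if local sound weakly modulates scene brightness, the mean brightness of each frame becomes one audio sample at the frame rate.

```python
import numpy as np

frame_rate = 240                 # assumed high-speed video (Hz)
n_frames = 480                   # 2 s of video
tone_hz = 30                     # must stay below frame_rate / 2
t = np.arange(n_frames) / frame_rate
modulation = 0.02 * np.sin(2 * np.pi * tone_hz * t)  # weak acoustic modulation

rng = np.random.default_rng(2)
base = rng.uniform(0.3, 0.7, size=(32, 32))          # static scene luminance
frames = base[None, :, :] * (1.0 + modulation)[:, None, None]

# "Readout": one audio sample per frame; remove the strong bias term.
audio = frames.mean(axis=(1, 2))
audio -= audio.mean()

# The recovered signal is dominated by the 30 Hz tone.
spectrum = np.abs(np.fft.rfft(audio))
peak_hz = np.fft.rfftfreq(n_frames, 1 / frame_rate)[np.argmax(spectrum)]
```

    The frame rate caps the recoverable audio bandwidth at half the frame rate, which is one reason the authors mostly used the higher-rate photodiode readout instead.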

  17. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio.

  18. 75 FR 3666 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-01-22

    ... additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound recordings.... 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in general... Collective, late fees, statements of account, audit and verification of royalty payments and distributions...

  19. 76 FR 56483 - Distribution of 2010 DART Sound Recordings Fund Royalties

    Science.gov (United States)

    2011-09-13

    ... responses to the motion to ascertain whether any claimant entitled to receive such royalty fees has a... LIBRARY OF CONGRESS Copyright Royalty Board [Docket No. 2011-6 CRB DD 2010] Distribution of 2010 DART Sound Recordings Fund Royalties AGENCY: Copyright Royalty Board, Library of Congress. ACTION...

  20. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  1. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  2. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed, higher than the recording speed the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  3. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  4. On doing two things at once: dolphin brain and nose coordinate sonar clicks, buzzes and emotional squeals with social sounds during fish capture.

    Science.gov (United States)

    Ridgway, Sam; Samuelson Dibble, Dianna; Van Alstyne, Kaitlin; Price, DruAnn

    2015-12-01

    Dolphins fishing alone in open waters may whistle without interrupting their sonar clicks as they find and eat or reject fish. Our study is the first to match sound and video from the dolphin with sound and video from near the fish. During search and capture of fish, free-swimming dolphins carried cameras to record video and sound. A hydrophone in the far field near the fish also recorded sound. From these two perspectives, we studied the time course of dolphin sound production during fish capture. Our observations identify the instant of fish capture. There are three consistent acoustic phases: sonar clicks locate the fish; about 0.4 s before capture, the dolphin clicks become more rapid to form a second phase, the terminal buzz; at or just before capture, the buzz turns to an emotional squeal (the victory squeal), which may last 0.2 to 20 s after capture. The squeals are pulse bursts that vary in duration, peak frequency and amplitude. The victory squeal may be a reflection of emotion triggered by brain dopamine release. It may also affect prey to ease capture and/or it may be a way to communicate the presence of food to other dolphins. Dolphins also use whistles as communication or social sounds. Whistling during sonar clicking suggests that dolphins may be adept at doing two things at once. We know that dolphin brain hemispheres may sleep independently. Our results suggest that the two dolphin brain hemispheres may also act independently in communication. © 2015. Published by The Company of Biologists Ltd.

  5. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  6. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper brings forward a video recording and replaying system with the architecture of Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA). The system achieved encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals which are displayed on a monitor during airplanes' and ships' navigation. In the architecture, the DSP is the main processor, used for the large amount of complicated calculation during digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer in the system. This transfer mode avoids a data-transfer bottleneck and simplifies the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface of an Integrated Drive Electronics (IDE) Hard Disk (HD), which offers high-speed data access without relying on a computer. Main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without CPU intervention, leaving the CPU free for computation. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Ways of achieving high code performance are briefly presented. The data-processing ability of the system is desirable, and the smoothness of the replayed video is acceptable. By right of its design flexibility and reliable operation, the system based on DSP and FPGA

  7. Computer analysis of sound recordings from two Anasazi sites in northwestern New Mexico

    Science.gov (United States)

    Loose, Richard

    2002-11-01

    Sound recordings were made at a natural outdoor amphitheater in Chaco Canyon and in a reconstructed great kiva at Aztec Ruins. Recordings included computer-generated tones and swept sine waves, classical concert flute, Native American flute, conch shell trumpet, and prerecorded music. Recording equipment included analog tape deck, digital minidisk recorder, and direct digital recording to a laptop computer disk. Microphones and geophones were used as transducers. The natural amphitheater lies between the ruins of Pueblo Bonito and Chetro Ketl. It is a semicircular arc in a sandstone cliff measuring 500 ft. wide and 75 ft. high. The radius of the arc was verified with aerial photography, and an acoustic ray trace was generated using CAD software. The arc is in an overhanging cliff face and brings distant sounds to a line focus. Along this line, there are unusual acoustic effects at conjugate foci. Time history analysis of recordings from both sites showed that a 60-dB reverb decay lasted from 1.8 to 2.0 s, nearly ideal for public performances of music. Echoes from the amphitheater were perceived to be upshifted in pitch, but this was not seen in FFT analysis. Geophones placed on the floor of the great kiva showed a resonance at 95 Hz.
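    The 60-dB reverb decay time reported here is conventionally estimated by Schroeder backward integration. The sketch below uses a synthetic impulse response of our own (not the Chaco Canyon recordings) with a known decay time: the backward-integrated energy gives a smooth decay curve, and a linear fit on its -5 to -25 dB portion is extrapolated to -60 dB.

```python
import numpy as np

fs = 8000
rt60_true = 1.9                            # seconds, like the reported 1.8-2.0 s
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(3)
# Noise with an exponential envelope: amplitude drops 60 dB at rt60_true.
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / rt60_true)

# Schroeder curve: backward-integrated energy, normalized, in dB.
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Fit the -5 dB to -25 dB region and extrapolate to -60 dB.
mask = (edc_db <= -5) & (edc_db >= -25)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
rt60_est = -60.0 / slope
```

    Fitting an early, truncation-free portion of the curve and extrapolating is what makes the estimate usable on real recordings with limited dynamic range.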

  8. Simultaneous recording of EEG and electromyographic polygraphy increases the diagnostic yield of video-EEG monitoring.

    Science.gov (United States)

    Hill, Aron T; Briggs, Belinda A; Seneviratne, Udaya

    2014-06-01

    To investigate the usefulness of adjunctive electromyographic (EMG) polygraphy in the diagnosis of clinical events captured during long-term video-EEG monitoring. A total of 40 patients (21 women, 19 men) aged between 19 and 72 years (mean 43) investigated using video-EEG monitoring were studied. Electromyographic activity was simultaneously recorded with EEG in four patients selected on clinical grounds. In these patients, surface EMG electrodes were placed over muscles suspected to be activated during a typical clinical event. Of the 40 patients investigated, 24 (60%) were given a diagnosis, whereas 16 (40%) remained undiagnosed. All four patients receiving adjunctive EMG polygraphy obtained a diagnosis, with three of these diagnoses being exclusively reliant on the EMG recordings. Specifically, one patient was diagnosed with propriospinal myoclonus, another patient was diagnosed with facio-mandibular myoclonus, and a third patient was found to have bruxism and periodic leg movements of sleep. The information obtained from surface EMG recordings aided the diagnosis of clinical events captured during video-EEG monitoring in 7.5% of the total cohort. This study suggests that EEG-EMG polygraphy may be used as a technique of improving the diagnostic yield of video-EEG monitoring in selected cases.

  9. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds is developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. Best performing subset is the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented which are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation location of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
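    The paper's metric, unweighted average recall (UAR), is the standard choice for strongly unbalanced datasets like this one: per-class recall is averaged with equal weight per class, regardless of class size. A minimal sketch with made-up labels for the four VOTE classes:

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: mean of per-class recalls."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy example: class "V" is large and easy, class "E" is small and hard.
y_true = np.array(["V"] * 8 + ["O"] * 2 + ["T"] * 2 + ["E"] * 2)
y_pred = np.array(["V"] * 8 + ["O", "V"] + ["T", "V"] + ["V", "V"])

# Plain accuracy looks decent (10/14), but UAR exposes the imbalance:
# per-class recalls are 0.0, 0.5, 0.5, 1.0, so UAR = 0.5.
```

    This is why a classifier that simply favors the majority class scores poorly on UAR even when its raw accuracy is high, which matters for the unbalanced snore-site classes described above.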

  10. 37 CFR 260.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d..., 2007, a Licensee's monthly royalty fee for the public performance of sound recordings pursuant to 17 U...

  11. Surround by Sound: A Review of Spatial Audio Recording and Reproduction

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2017-05-01

    Full Text Available In this article, a systematic overview of various recording and reproduction techniques for spatial audio is presented. While binaural recording and rendering are designed to resemble the human two-ear auditory system and reproduce sounds specifically for a listener’s two ears, soundfield recording and reproduction using a large number of microphones and loudspeakers replicate an acoustic scene within a region. These two fundamentally different types of techniques are discussed, and a recently popular area, multi-zone reproduction, is also briefly reviewed. The article concludes with a discussion of the current state of the field and open problems.

  12. Game Sound from Behind the Sofa

    DEFF Research Database (Denmark)

    Garner, Tom Alexander

    2013-01-01

    The central concern of this thesis is upon the processes by which human beings perceive sound and experience emotions within a computer video gameplay context. The potential of quantitative sound parameters to evoke and modulate emotional experience is explored, working towards the development...... that provide additional support of the hypothetical frameworks: an ecological process of fear, a fear-related model of virtual and real acoustic ecologies, and an embodied virtual acoustic ecology framework. It is intended that this thesis will clearly support more effective and efficient sound design...... practices and also improve awareness of the capacity of sound to generate significant emotional experiences during computer video gameplay. It is further hoped that this thesis will elucidate the potential of biometrics/psychophysiology to allow game designers to better understand the player and to move...

  13. 37 CFR 270.1 - Notice of use of sound recordings under statutory license.

    Science.gov (United States)

    2010-07-01

    ..., and the primary purpose of the service is not to sell, advertise, or promote particular products or services other than sound recordings, live concerts, or other music-related events. (iv) A new subscription...

  14. Direct speed of sound measurement within the atmosphere during a national holiday in New Zealand

    Science.gov (United States)

    Vollmer, M.

    2018-05-01

    Measuring the speed of sound belongs to almost any physics curriculum. Two methods dominate: measuring resonance phenomena of standing waves, or time-of-flight measurements. The second type is conceptually simpler; however, performing such experiments over distances of metres usually requires precise electronic time-measurement equipment if accurate results are to be obtained. Here, a time-of-flight measurement over a distance of several kilometres, derived from a video recording, is reported, with an accuracy of the order of 1% for the speed of sound.
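The underlying computation is just distance over time, with the time base taken from the video's frame rate. A small sketch with hypothetical numbers (the actual distance and frame rate of the reported experiment are not given here):

```python
# Estimating the speed of sound from a video recording:
# the flash (light, effectively instantaneous) marks the event,
# and the bang arrives some number of frames later on the soundtrack.
def speed_of_sound(distance_m, flash_frame, bang_frame, fps):
    """Time-of-flight estimate; frame timing limits the achievable accuracy."""
    delta_t = (bang_frame - flash_frame) / fps
    return distance_m / delta_t

# Hypothetical numbers: a 3.4 km baseline, 25 fps video,
# sound arriving 250 frames (10 s) after the flash.
v = speed_of_sound(3400.0, 100, 350, 25.0)
print(v)  # 340.0 m/s
```

At 25 fps, a single-frame timing error over a 10 s flight corresponds to 0.4%, consistent with the ~1% accuracy reported.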

  15. Home Video Telemetry vs inpatient telemetry: A comparative study looking at video quality

    Directory of Open Access Journals (Sweden)

    Sutapa Biswas

    Full Text Available Objective: To compare the quality of home video recording with inpatient telemetry (IPT) to evaluate our current Home Video Telemetry (HVT) practice. Method: To assess our HVT practice, a retrospective comparison of the video quality against IPT was conducted with the latter as the gold standard. A pilot study had been conducted in 2008 on 5 patients. Patients (n = 28) were included in each group over a period of one year. The data was collected from referral spreadsheets, King’s EPR and telemetry archive. Scoring of the events captured was by consensus using two scorers. The variables compared included: visibility of the body part of interest, visibility of eyes, time of event, illumination, contrast, sound quality and picture clarity when amplified to 200%. Statistical evaluation was carried out using Shapiro–Wilk and Chi-square tests. A P-value of ⩽0.05 was considered statistically significant. Results: Significant differences were demonstrated in lighting and contrast between the two groups (HVT performed better in both). Amplified picture quality was slightly better in the HVT group. Conclusion: Video quality of HVT is comparable to IPT, even surpassing IPT in certain aspects such as the level of illumination and contrast. Results were reconfirmed in a larger sample of patients with more variables. Significance: Despite the user and environmental variability in HVT, it looks promising and can be seriously considered as a preferable alternative for patients who may require investigation at locations remote from an EEG laboratory. Keywords: Home Video Telemetry, EEG, Home video monitoring, Video quality

  16. Comparison of cardiopulmonary resuscitation techniques using video camera recordings.

    OpenAIRE

    Mann, C J; Heyworth, J

    1996-01-01

    OBJECTIVE--To use video recordings to compare the performance of resuscitation teams in relation to their previous training in cardiac resuscitation. METHODS--Over a 10 month period all cardiopulmonary resuscitations carried out in an accident and emergency (A&E) resuscitation room were videotaped. The following variables were monitored: (1) time to perform three defibrillatory shocks; (2) time to give intravenous adrenaline (centrally or peripherally); (3) the numbers and grade of medical an...

  17. Usefulness of video images from a X-ray simulator in recordings of the treatment portal of pulmonary lesion

    International Nuclear Information System (INIS)

    Nishioka, Masayuki; Sakurai, Makoto; Fujioka, Tomio; Fukuoka, Masahiro; Kusunoki, Yoko; Nakajima, Toshifumi; Onoyama, Yasuto.

    1992-01-01

    Movement of the target volume should be taken into consideration in treatment planning. Respiratory movement is the greatest motion in radiotherapy for pulmonary lesions. We combined video with an X-ray simulator to record this movement. Of 50 patients whose images were recorded, respiratory movements of 0 to 4 mm, of 5 to 9 mm, and of more than 10 mm were observed in 13, 21, and 16 patients, respectively. Discrepancies of 5 to 9 mm and of more than 10 mm between simulator films and video images were observed in 14 and 13 patients, respectively. These results show that video images are useful for recording movement while considering respiratory motion. We recommend that a video system added to an X-ray simulator be used for treatment planning, especially in radiotherapy for pulmonary lesions. (author)

  18. Guest blog: Jacob Davidsen and Paul McIlvenny on Experiments with Big Video

    DEFF Research Database (Denmark)

    Davidsen, Jacob; Mcilvenny, Paul Bruce

    2016-01-01

    How good are your video records? One angle? Two? Wide-angle? Was the camera static or did you move to catch things – and miss other things? How good was the sound? All of us have occasionally been frustrated with what we find on the screen when we come to analyse it, but Jacob Davidsen and Paul M...

  19. Learning with Sound Recordings: A History of Suzuki's Mediated Pedagogy

    Science.gov (United States)

    Thibeault, Matthew D.

    2018-01-01

    This article presents a history of mediated pedagogy in the Suzuki Method, the first widespread approach to learning an instrument in which sound recordings were central. Media are conceptualized as socially constituted: philosophical ideas, pedagogic practices, and cultural values that together form a contingent and changing technological…

  20. Using Grounded Theory to Analyze Qualitative Observational Data that is Obtained by Video Recording

    Directory of Open Access Journals (Sweden)

    Colin Griffiths

    2013-06-01

    Full Text Available This paper presents a method for the collection and analysis of qualitative data that is derived by observation and that may be used to generate a grounded theory. Video recordings were made of the verbal and non-verbal interactions of people with severe and complex disabilities and the staff who work with them. Three dyads composed of a student/teacher or carer and a person with a severe or profound intellectual disability were observed in a variety of different activities that took place in a school. Two of these recordings yielded 25 minutes of video, which was transcribed into narrative format. The nature of the qualitative micro data that was captured is described and the fit between such data and classic grounded theory is discussed. The strengths and weaknesses of the use of video as a tool to collect data that is amenable to analysis using grounded theory are considered. The paper concludes by suggesting that using classic grounded theory to analyze qualitative data that is collected using video offers a method that has the potential to uncover and explain patterns of non-verbal interactions that were not previously evident.

  1. Evidence of sound production by spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain

    Science.gov (United States)

    Johnson, Nicholas S.; Higgs, Dennis; Binder, Thomas R.; Marsden, J. Ellen; Buchinger, Tyler John; Brege, Linnea; Bruning, Tyler; Farha, Steve A.; Krueger, Charles C.

    2018-01-01

    Two sounds associated with spawning lake trout (Salvelinus namaycush) in lakes Huron and Champlain were characterized by comparing sound recordings to behavioral data collected using acoustic telemetry and video. These sounds were named growls and snaps, and were heard on lake trout spawning reefs, but not on a non-spawning reef, and were more common at night than during the day. Growls also occurred more often during the spawning period than the pre-spawning period, while the trend for snaps was reversed. In a laboratory flume, sounds occurred when male lake trout were displaying spawning behaviors: growls when males were quivering and parallel swimming, and snaps when males moved their jaws. Combined with the observation of possible sound production by spawning splake (Salvelinus fontinalis × Salvelinus namaycush hybrid), our results provide rare evidence for spawning-related sound production by a salmonid, or any other fish in the superorder Protacanthopterygii. Further characterization of these sounds could be useful for lake trout assessment, restoration, and control.

  2. 37 CFR 383.3 - Royalty fees for public performances of sound recordings and the making of ephemeral recordings.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for public... SUBSCRIPTION SERVICES § 383.3 Royalty fees for public performances of sound recordings and the making of... regulations for all years 2007 and earlier. Such fee shall be recoupable and credited against royalties due in...

  3. Add Audio and Video to Your Site

    CERN Document Server

    MacDonald, Matthew

    2010-01-01

    Nothing spices up websites like cool sound effects (think ker-thunk as visitors press a button) or embedded videos. Think you need a programmer to add sizzle to your site? Think again. This hands-on guide gives you the techniques you need to add video, music, animated GIFs, and sound effects to your site. This Mini Missing Manual is excerpted from Creating a Web Site: The Missing Manual.

  4. Let's Make a Movie: Investigating Pre-Service Teachers' Reflections on Using Video Recorded Role Playing Cases in Turkey

    Science.gov (United States)

    Koc, Mustafa

    2011-01-01

    This study examined the potential consequences of using student-filmed video cases in the study of classroom management in teacher education. Pre-service teachers in groups were engaged in video-recorded role playing to simulate classroom memoirs. Each group shared their video cases and interpretations in a class presentation. Qualitative data…

  5. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population.

    Science.gov (United States)

    Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe

    2016-03-01

    Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from respiratory sounds recorded with a smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis followed by a pattern classifier using machine learning) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm over a set of 39 recordings having a single operator and found fair agreement (kappa = 0.28, 95% CI [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is its use in contact-free sound recording, which is valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.
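The reported figures (sensitivity, specificity, and Cohen's kappa for operator-algorithm agreement) can all be derived from a confusion table. A sketch with made-up labels, not the study's data:

```python
import numpy as np

# Hypothetical labels: 1 = wheezing heard, 0 = no wheezing.
operator  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
algorithm = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])

tp = np.sum((operator == 1) & (algorithm == 1))
tn = np.sum((operator == 0) & (algorithm == 0))
fp = np.sum((operator == 0) & (algorithm == 1))
fn = np.sum((operator == 1) & (algorithm == 0))

sensitivity = tp / (tp + fn)   # wheezing recordings correctly flagged
specificity = tn / (tn + fp)   # non-wheezing recordings correctly cleared

# Cohen's kappa: observed agreement corrected for chance agreement.
po = np.mean(operator == algorithm)
pe = (np.mean(operator) * np.mean(algorithm)
      + (1 - np.mean(operator)) * (1 - np.mean(algorithm)))
kappa = (po - pe) / (1 - pe)
print(sensitivity, specificity, kappa)
```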

  6. Localizing wushu players on a platform based on a video recording

    Science.gov (United States)

    Peczek, Piotr M.; Zabołotny, Wojciech M.

    2017-08-01

    This article describes the development of a method to localize an athlete during a sports performance on a platform, based on a static video recording. The sport considered for this method is wushu, a martial art; however, any other discipline could be used. Requirements are specified, and two image-processing algorithms are described. The next part presents an experiment conducted on recordings from the Pan American Wushu Championship, through which the steps of the algorithm are shown. Results are evaluated manually. The last part of the article assesses whether the algorithm is applicable and what improvements would have to be implemented to use it during sports competitions as well as for offline analysis.
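The article does not spell out the two algorithms, but a common baseline for localizing a single moving subject against a static background is frame differencing followed by a centroid computation. The sketch below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def locate_athlete(prev_frame, frame, threshold=30):
    """Localize the moving subject as the centroid of pixels that
    changed between consecutive frames (simple frame differencing)."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moving = diff > threshold
    if not moving.any():
        return None                     # no motion detected
    rows, cols = np.nonzero(moving)
    return rows.mean(), cols.mean()     # (y, x) in pixel coordinates

# Toy grayscale frames: a bright 'athlete' patch appears on a dark platform.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:50, 70:80] = 200               # patch centred near (44.5, 74.5)

cy, cx = locate_athlete(prev_frame, frame)
print(cy, cx)  # 44.5 74.5
```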

  7. Improvement of Skills in Cardiopulmonary Resuscitation of Pediatric Residents by Recorded Video Feedbacks.

    Science.gov (United States)

    Anantasit, Nattachai; Vaewpanich, Jarin; Kuptanon, Teeradej; Kamalaporn, Haruitai; Khositseth, Anant

    2016-11-01

    To evaluate pediatric residents' cardiopulmonary resuscitation (CPR) skills and their improvement after recorded video feedback. Pediatric residents from a university hospital were enrolled. The authors surveyed the level of pediatric resuscitation skill confidence by questionnaire. Eight psychomotor skills were evaluated individually, including airway, bag-mask ventilation, pulse check, prompt starting and technique of chest compression, high-quality CPR, tracheal intubation, intraosseous access, and defibrillation. Mock code skills were also evaluated as a team using a high-fidelity mannequin simulator. All participants attended a concise Pediatric Advanced Life Support (PALS) lecture and received video-recorded feedback for one hour. They were re-evaluated 6 wk later in the same manner. Thirty-eight residents were enrolled. All participants had a moderate to high level of confidence in their CPR skills. Over 50% of participants passed the psychomotor skills assessments, except for bag-mask ventilation and intraosseous access. There was poor correlation between confidence and passing the psychomotor skills test. After course feedback, the percentage achieving high-quality CPR skill in the second course test was significantly improved (46% to 92%, p = 0.008). The pediatric resuscitation course should remain in the pediatric resident curriculum and should be re-evaluated frequently. Video-recorded feedback on pitfalls during individual CPR skills and mock code case scenarios can improve short-term psychomotor CPR skills and lead to higher-quality CPR performance.

  8. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    Science.gov (United States)

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  9. EEG in the classroom: Synchronised neural recordings during video presentation

    DEFF Research Database (Denmark)

    Poulsen, Andreas Trier; Kamronn, Simon Due; Dmochowski, Jacek

    2017-01-01

    We performed simultaneous recordings of electroencephalography (EEG) from multiple students in a classroom, and measured the inter-subject correlation (ISC) of activity evoked by a common video stimulus. The neural reliability, as quantified by ISC, has been linked to engagement and attentional......-evoked neural responses, known to be modulated by attention, can be tracked for groups of students with synchronized EEG acquisition. This is a step towards real-time inference of engagement in the classroom....
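The paper's ISC is computed with correlated components analysis; a simplified stand-in that conveys the idea is the mean pairwise Pearson correlation between subjects' signals, sketched here on synthetic data (a shared "evoked" component plus per-subject noise; the names and noise levels are illustrative assumptions):

```python
import numpy as np

def inter_subject_correlation(recordings):
    """Mean pairwise Pearson correlation across subjects' signals,
    a simplified stand-in for the ISC measure used with EEG."""
    corr = np.corrcoef(recordings)                 # subjects x subjects
    n = corr.shape[0]
    return corr[np.triu_indices(n, k=1)].mean()    # mean of off-diagonal pairs

rng = np.random.default_rng(1)
stimulus = np.sin(np.linspace(0, 20 * np.pi, 1000))   # shared evoked response
# Each 'subject' records the same stimulus-driven component plus own noise.
subjects = np.array([stimulus + rng.normal(0, 0.5, 1000) for _ in range(5)])
isc = inter_subject_correlation(subjects)
print(round(isc, 2))
```

With signal variance 0.5 and noise variance 0.25, the expected pairwise correlation is about 2/3; more attentive (less noisy) responses push the ISC up.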

  10. DESIGN AND APPLICATION OF SENSOR FOR RECORDING SOUNDS OVER HUMAN EYE AND NOSE

    NARCIS (Netherlands)

    JOURNEE, HL; VANBRUGGEN, AC; VANDERMEER, JJ; DEJONGE, AB; MOOIJ, JJA

    The recording of sounds over the oribt of the eye has been found to be useful in the detection of intracranial aneurysms. A hydrophone for auscultation over the eye has been developed and is tested under controlled conditions. The tests consist of measurement over the eyes in three healthy

  11. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
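The comparison scheme described (shared sample points whose gray-scale values must agree within limits) can be sketched as follows; the point count, tolerance, and shared pseudo-random schedule are illustrative assumptions:

```python
import numpy as np

def authenticate(reference, received, n_points=64, tolerance=8,
                 max_mismatches=3, seed=42):
    """Compare gray-scale values at shared pseudo-random sample points;
    the image is authenticated when nearly all points agree within limits."""
    point_rng = np.random.default_rng(seed)   # both ends share this schedule
    ys = point_rng.integers(0, reference.shape[0], n_points)
    xs = point_rng.integers(0, reference.shape[1], n_points)
    diffs = np.abs(reference[ys, xs].astype(int) - received[ys, xs].astype(int))
    return int(np.sum(diffs > tolerance)) <= max_mismatches

rng = np.random.default_rng(0)
camera_image = rng.integers(0, 256, (240, 320), dtype=np.uint8)
# Same image with slight transmission noise: should authenticate.
noisy_copy = np.clip(camera_image.astype(int)
                     + rng.integers(-3, 4, camera_image.shape),
                     0, 255).astype(np.uint8)
# An unrelated substituted image: should fail at many sample points.
substituted = rng.integers(0, 256, (240, 320), dtype=np.uint8)

print(authenticate(camera_image, noisy_copy))   # True
print(authenticate(camera_image, substituted))  # False
```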

  12. Sounds of silence: How to animate virtual worlds with sound

    Science.gov (United States)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  13. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques, we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  14. Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.

    Science.gov (United States)

    Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R

    2018-06-01

    The purpose of this study was to determine the accuracy of 14 step-counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step-counting methods estimated less than 100% of actual steps; Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.

  15. The client’s ideas and fantasies of the supervisor in video recorded psychotherapy

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Jensen, Karen Boelt; Madsen, Ninna Skov

    2010-01-01

    Aim: Despite the current relatively widespread use of video as a supervisory tool, there are few empirical studies on how recordings influence the relationship between client and supervisor. This paper presents a qualitative, explorative study of clients’ experience of having their psychotherapy...

  16. 3D reconstruction of cystoscopy videos for comprehensive bladder records.

    Science.gov (United States)

    Lurie, Kristen L; Angst, Roland; Zlatev, Dimitar V; Liao, Joseph C; Ellerbee Bowden, Audrey K

    2017-04-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research.
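At the core of any such video-to-3D pipeline is multi-view triangulation: recovering a 3D point from its projections in two camera views. Below is a minimal linear (DLT) triangulation sketch with made-up camera matrices, not the paper's full reconstruction method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views:
    stack the projection constraints and take the null vector via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # homogeneous 3D point
    return X[:3] / X[3]

# Two hypothetical pinhole cameras: identity pose, and a 1-unit baseline in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]               # projected image coordinates, view 1
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]               # projected image coordinates, view 2

print(np.round(triangulate(P1, P2, x1, x2), 6))
```

With noise-free projections the original point is recovered exactly; real pipelines estimate the camera poses themselves from feature matches before triangulating.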

  17. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song videos as TV and Internet-genre. In addition, it considers problems of screen images creation depending on the musical form and the text of a songs in connection with relevant principles of accent and phraseological video editing and filming techniques as well as with additional frames and sound elements.

  18. Does Wearable Medical Technology With Video Recording Capability Add Value to On-Call Surgical Evaluations?

    Science.gov (United States)

    Gupta, Sameer; Boehme, Jacqueline; Manser, Kelly; Dewar, Jannine; Miller, Amie; Siddiqui, Gina; Schwaitzberg, Steven D

    2016-10-01

    Background Google Glass has been used in a variety of medical settings with promising results. We explored the use and potential value of an asynchronous, near-real time protocol-which avoids transmission issues associated with real-time applications-for recording, uploading, and viewing of high-definition (HD) visual media in the emergency department (ED) to facilitate remote surgical consults. Study Design First-responder physician assistants captured pertinent aspects of the physical examination and diagnostic imaging using Google Glass' HD video or high-resolution photographs. The visual media were then securely uploaded to the study website. The surgical consultation then proceeded over the phone in the usual fashion and a clinical decision was made. The surgeon then accessed the study website to review the uploaded video. This was followed by a questionnaire regarding how the additional data impacted the consultation. Results The management plan changed in 24% (11) of cases after surgeons viewed the video. Five of these plans involved decision making regarding operative intervention. Although surgeons were generally confident in their initial management plan, confidence scores increased further in 44% (20) of cases. In addition, we surveyed 276 ED patients on their opinions concerning the practice of health care providers wearing and using recording devices in the ED. The survey results revealed that the majority of patients are amenable to the addition of wearable technology with video functionality to their care. Conclusions This study demonstrates the potential value of a medically dedicated, hands-free, HD recording device with internet connectivity in facilitating remote surgical consultation. © The Author(s) 2016.

  19. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS from evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients of IPS, as well as errors, were compared between colocated and video review. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) to 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated training status (p < 0.05) as well as colocated evaluators for these trauma core competency procedures. Prognostic study, level II.

  20. Application of video recording technology to improve husbandry and reproduction in the carmine bee-eater (Merops n. nubicus).

    Science.gov (United States)

    Ferrie, Gina M; Sky, Christy; Schutz, Paul J; Quinones, Glorieli; Breeding, Shawnlei; Plasse, Chelle; Leighty, Katherine A; Bettinger, Tammie L

    2016-01-01

    Incorporating technology with research is becoming increasingly important to enhance animal welfare in zoological settings. Video technology is used in the management of avian populations to facilitate efficient information collection on aspects of avian reproduction that are impractical or impossible to obtain through direct observation. Disney's Animal Kingdom® maintains a successful breeding colony of Northern carmine bee-eaters. This African species is a cavity nester, making its nesting behavior difficult to study and manage in an ex situ setting. After initial research focused on developing a suitable nesting environment, our goal was to continue developing methods to improve reproductive success and increase the likelihood of chicks fledging. We installed infrared bullet cameras in five nest boxes and connected them to a digital video recording system, with data recorded continuously through the breeding season. We then scored and summarized nesting behaviors. Using remote video methods of observation provided much insight into the behavior of the birds in the colony's nest boxes. We observed aggression between birds during the egg-laying period, and therefore immediately removed all of the eggs for artificial incubation, which completely eliminated egg breakage. We also used observations of adult feeding behavior to refine chick hand-rearing diet and practices. Although many video recording configurations have been summarized and evaluated in various reviews, we found success with the digital video recorder and infrared cameras described here. Applying emerging technologies to cavity nesting avian species is a necessary addition to improving management in and sustainability of zoo avian populations. © 2015 Wiley Periodicals, Inc.

  1. Observing the Testing Effect using Coursera Video-recorded Lectures: A Preliminary Study

    Directory of Open Access Journals (Sweden)

Paul Zhihao Yong

    2016-01-01

We investigated the testing effect in Coursera video-based learning. One hundred and twenty-three participants either (a) studied an instructional video-recorded lecture four times, (b) studied the lecture three times and took one recall test, or (c) studied the lecture once and took three tests. They then took a final recall test, either immediately or a week later, through which their learning was assessed. Whereas repeated studying produced better recall performance than repeated testing when the final test was administered immediately, testing produced better performance when the final test was delayed until a week after. The testing effect was observed using Coursera lectures. Future directions are documented.

  2. Acoustic analysis of snoring sounds recorded with a smartphone according to obstruction site in OSAS patients.

    Science.gov (United States)

Koo, Soo Kweon; Kwon, Soon Bok; Kim, Yang Jae; Moon, Ji Seung; Kim, Young Jun; Jung, Sung Hoon

    2017-03-01

Snoring is a sign of increased upper airway resistance and is the most common symptom suggestive of obstructive sleep apnea. Acoustic analysis of snoring sounds is a non-invasive diagnostic technique and may provide a screening test that can determine the location of obstruction sites. We recorded snoring sounds according to obstruction level, measured by DISE, using a smartphone, and focused on the analysis of formant frequencies. The study group comprised 32 male patients (mean age 42.9 years). The spectrogram pattern, intensity (dB), fundamental frequency (F0), and formant frequencies (F1, F2, and F3) of the snoring sounds were analyzed for each subject. On spectrographic analysis, retropalatal level obstruction tended to produce sharp and regular peaks, while retrolingual level obstruction tended to show peaks with a gradual onset and decay. On formant frequency analysis, F1 (retropalatal vs. retrolingual level: 488.1 ± 125.8 vs. 634.7 ± 196.6 Hz) and F2 (retropalatal vs. retrolingual level: 1267.3 ± 306.6 vs. 1723.7 ± 550.0 Hz) were significantly higher for retrolingual level obstructions than for retropalatal level obstruction. A smartphone can be effective for recording snoring sounds.
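Formant frequencies such as the F1 and F2 values reported above are conventionally estimated with linear predictive coding (LPC): fit an all-pole model to the signal and read resonances off the angles of the sharp polynomial roots. The sketch below is an illustrative reconstruction, not the authors' analysis pipeline; the sampling rate, model order, and the synthetic 800 Hz resonance are assumptions for the demo.

```python
import numpy as np

def lpc_coefficients(x, order):
    """LPC via the autocorrelation (Yule-Walker) method."""
    x = x - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))   # prediction-error filter A(z)

def formants(x, fs, order=8, min_mag=0.8):
    """Resonance frequencies (Hz): angles of sharp upper-half-plane roots of A(z)."""
    roots = np.roots(lpc_coefficients(x, order))
    picked = [z for z in roots if z.imag > 1e-6 and abs(z) > min_mag]
    return sorted(np.angle(z) / (2 * np.pi) * fs for z in picked)

# Synthetic "snore": white noise through a two-pole resonator near 800 Hz
rng = np.random.default_rng(0)
fs, f0, pole_r = 8000, 800.0, 0.97
theta = 2 * np.pi * f0 / fs
a1, a2 = -2 * pole_r * np.cos(theta), pole_r ** 2
e = rng.standard_normal(fs)              # 1 s of excitation noise
y = np.zeros(fs)
for n in range(2, fs):
    y[n] = e[n] - a1 * y[n - 1] - a2 * y[n - 2]
fest = formants(y, fs)                   # one entry should sit near 800 Hz
```

In practice one would compute this per analysis frame and average, as formant trackers such as Praat's do.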

  3. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

We examine to what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures, and online quizzes with appropriate feedback has an additional positive impact on student performance compared to a traditional, face-to-face-only lecture setting.

  4. Video Recording and the Research Process

    Science.gov (United States)

    Leung, Constant; Hawkins, Margaret R.

    2011-01-01

This is a two-part discussion. Part 1 is titled "English Language Learning in Subject Lessons", and Part 2 is titled "Video as a Research Tool/Counterpoint". Working with different research concerns, the authors attempt to draw attention to a set of methodological and theoretical issues that have emerged in the research process using video data.

  5. Analyzing communication skills of Pediatric Postgraduate Residents in Clinical Encounter by using video recordings.

    Science.gov (United States)

    Bari, Attia; Khan, Rehan Ahmed; Jabeen, Uzma; Rathore, Ahsan Waheed

    2017-01-01

To analyze communication skills of pediatric postgraduate residents in a clinical encounter by using video recordings. This qualitative exploratory research was conducted through video recording at The Children's Hospital Lahore, Pakistan. Residents who had attended the mandatory communication skills workshop offered by CPSP were included. The video recording of the clinical encounter was done by a trained audiovisual person while the resident was interacting with the patient. Data were analyzed by thematic analysis. Initially, 36 codes emerged on open coding; through axial and selective coding these were condensed to 17 subthemes, from which four main themes emerged: (1) courteous and polite attitude, (2) marginal nonverbal communication skills, (3) power game/ignoring child participation, and (4) patient as medical object/instrumental behaviour. All residents treated the patient as a medical object to reach a correct diagnosis and ignored them as human beings. Doctors played a dominant role, and residents displayed only marginal nonverbal communication skills, with a lack of social touch and of appropriate eye contact owing to note-taking. A brief non-medical interaction for rapport building at the beginning of the encounter was missing, and there was a lack of child involvement. Pediatric postgraduate residents were polite while communicating with parents and children but lacked good nonverbal communication skills. The communication pattern in our study was mostly one-way, showing the doctors' instrumental behaviour and ignoring child participation.

  6. Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today's learning environment?

    Science.gov (United States)

    Seif, Gretchen A; Brown, Debora

    2013-01-01

    It is difficult to provide real-world learning experiences for students to master clinical and communication skills. The purpose of this paper is to describe a novel instructional method using self- and peer-assessment, reflection, and technology to help students develop effective interpersonal and clinical skills. The teaching method is described by the constructivist learning theory and incorporates the use of educational technology. The learning activities were incorporated into the pre-clinical didactic curriculum. The students participated in two video-recording assignments and performed self-assessments on each and had a peer-assessment on the second video-recording. The learning activity was evaluated through the self- and peer-assessments and an instructor-designed survey. This evaluation identified several themes related to the assignment, student performance, clinical behaviors and establishing rapport. Overall the students perceived that the learning activities assisted in the development of clinical and communication skills prior to direct patient care. The use of video recordings of a simulated history and examination is a unique learning activity for preclinical PT students in the development of clinical and communication skills.

  7. 76 FR 45695 - Notice and Recordkeeping for Use of Sound Recordings Under Statutory License

    Science.gov (United States)

    2011-08-01

LIBRARY OF CONGRESS, Copyright Royalty Board, 37 CFR Parts 370 and 382 [Docket No. RM 2011-5]. Notice and Recordkeeping for Use of Sound Recordings Under Statutory License. AGENCY: Copyright Royalty Board... operating under these licenses are required to, among other things, pay royalty fees and report to copyright...

  8. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    Science.gov (United States)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
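The image-processing technique described above, tracking cavitating fluid in a specified target area in both the temporal and frequency domains, can be illustrated with a minimal sketch: average the pixel intensity in a region of interest per frame, then take the FFT of that trace to find the instability frequency. This is not the authors' pipeline; the frame size, frame rate, ROI, and the synthetic 500 Hz oscillation are assumptions for the demo.

```python
import numpy as np

def roi_frequency(frames, fps, roi):
    """Dominant oscillation frequency (Hz) of mean pixel intensity in a target area."""
    r0, r1, c0, c1 = roi
    trace = np.array([f[r0:r1, c0:c1].mean() for f in frames])  # temporal domain
    trace -= trace.mean()                                       # drop DC
    spectrum = np.abs(np.fft.rfft(trace))                       # frequency domain
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic high-speed sequence: ROI brightness pulses at a 500 Hz "instability"
fps, n_frames, f_cav = 6000, 1200, 500.0
t = np.arange(n_frames) / fps
frames = [np.full((64, 64), 128.0) + 40.0 * np.sin(2 * np.pi * f_cav * ti) for ti in t]
f_est = roi_frequency(frames, fps, roi=(16, 48, 16, 48))
```

The abstract's observation that accuracy depends on target size corresponds here to the choice of `roi`: too small a window picks up local pixel noise, too large a window averages several cavitation cells together.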

  9. Noise detection in heart sound recordings.

    Science.gov (United States)

    Zia, Mohammad K; Griffel, Benjamin; Fridman, Vladimir; Saponieri, Cesare; Semmlow, John L

    2011-01-01

    Coronary artery disease (CAD) is the leading cause of death in the United States. Although progression of CAD can be controlled using drugs and diet, it is usually detected in advanced stages when invasive treatment is required. Current methods to detect CAD are invasive and/or costly, hence not suitable as a regular screening tool to detect CAD in early stages. Currently, we are developing a noninvasive and cost-effective system to detect CAD using the acoustic approach. This method identifies sounds generated by turbulent flow through partially narrowed coronary arteries to detect CAD. The limiting factor of this method is sensitivity to noises commonly encountered in the clinical setting. Because the CAD sounds are faint, these noises can easily obscure the CAD sounds and make detection impossible. In this paper, we propose a method to detect and eliminate noise encountered in the clinical setting using a reference channel. We show that our method is effective in detecting noise, which is essential to the success of the acoustic approach.
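A standard way to exploit a reference channel, as described above, is adaptive noise cancellation: predict the noise component of the primary (chest) channel from the reference channel and subtract it, leaving the faint signal in the residual. The least-mean-squares (LMS) sketch below is a generic illustration, not the authors' algorithm; the filter order, step size, and synthetic signals are assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, order=4, mu=0.01):
    """LMS adaptive canceller: learn the reference-to-primary noise path,
    return the residual (the estimate of the underlying signal)."""
    w = np.zeros(order)
    residual = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]  # most recent reference samples
        e = primary[n] - w @ x                    # residual = signal estimate
        w += 2 * mu * e * x                       # stochastic gradient step
        residual[n] = e
    return residual

rng = np.random.default_rng(1)
n = 5000
t = np.arange(n) / 1000.0
signal = 0.5 * np.sin(2 * np.pi * 30 * t)              # faint stand-in for a CAD sound
noise_ref = rng.standard_normal(n)                     # reference-channel pickup
room_noise = np.convolve(noise_ref, [0.8, -0.4], mode="full")[:n]
primary = signal + room_noise                          # chest channel: signal + noise
residual = lms_cancel(primary, noise_ref)              # residual approaches `signal`
```

After convergence the filter weights approximate the acoustic path from the reference microphone to the chest sensor, so the subtraction removes most of the correlated noise.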

  10. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

In order to discuss the spatial aspects of recorded sound analytically, William Moylan's concept of 'sound stage' is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs ('The Lady in My Life' from 1982 and 'Scream' from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  11. Effect of a Neonatal Resuscitation Course on Healthcare Providers' Performances Assessed by Video Recording in a Low-Resource Setting.

    Science.gov (United States)

    Trevisanuto, Daniele; Bertuola, Federica; Lanzoni, Paolo; Cavallin, Francesco; Matediana, Eduardo; Manzungu, Olivier Wingi; Gomez, Ermelinda; Da Dalt, Liviana; Putoto, Giovanni

    2015-01-01

We assessed the effect of an adapted neonatal resuscitation program (NRP) course on healthcare providers' performances in a low-resource setting through the use of video recording. A video recorder, mounted to the radiant warmers in the delivery rooms at Beira Central Hospital, Mozambique, was used to record all resuscitations. One hundred resuscitations (50 before and 50 after participation in an adapted NRP course) were collected and assessed based on a previously published score. All 100 neonates received initial steps; of these, 77 and 32 needed bag-mask ventilation (BMV) and chest compressions (CC), respectively. There was a significant improvement in resuscitation scores at all levels of resuscitation from before to after the course: for "initial steps", the score increased from 33% (IQR 28-39) to 44% (IQR 39-56). Providers' performances improved after participation in an adapted NRP course. Video recording was well-accepted by the staff, useful for objective assessment of performance during resuscitation, and can be used as an educational tool in a low-resource setting.

  12. Generation of ultra-sound during tape peeling

    KAUST Repository

Marston, Jeremy O.; Riker, Paul W.; Thoroddsen, Sigurdur T.

    2014-03-21

    We investigate the generation of the screeching sound commonly heard during tape peeling using synchronised high-speed video and audio acquisition. We determine the peak frequencies in the audio spectrum and, in addition to a peak frequency at the upper end of the audible range (around 20 kHz), we find an unexpected strong sound with a high-frequency far above the audible range, typically around 50 kHz. Using the corresponding video data, the origins of the key frequencies are confirmed as being due to the substructure "fracture" bands, which we herein observe in both high-speed continuous peeling motions and in the slip phases for stick-slip peeling motions.

  14. NOAA Climate Data Record (CDR) of Advanced Microwave Sounding Unit (AMSU)-A Brightness Temperature, Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Climate Data Record (CDR) for Advanced Microwave Sounding Unit-A (AMSU-A) brightness temperature in "window channels". The data cover a time period from...

  15. Analysis of sound data streamed over the network

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2013-01-01

In this paper we inspect the difference between an original sound recording and the signal captured after streaming that original recording over a network loaded with heavy traffic. Several kinds of failures occur in the captured recording as a result of network congestion. We try to find a method for evaluating the correctness of streamed audio. Usually, metrics are based on human perception of the signal, such as "the signal is clear, without audible failures", "the signal has some failures but is understandable", or "the signal is inarticulate". These approaches need to be statistically evaluated on a broad set of respondents, which is time- and resource-consuming. We instead propose metrics based on signal properties that allow us to compare the original and captured recordings. In this paper we use the Dynamic Time Warping algorithm (Müller, 2007), commonly used for time series comparison. Some other time series exploration approaches can be found in (Fejfar, 2011) and (Fejfar, 2012). The data were acquired in our network laboratory, simulating network traffic by downloading files and streaming audio and video simultaneously. Our former experiment inspected Quality of Service (QoS) and its impact on failures in the received audio data stream; this experiment focuses on the comparison of sound recordings rather than network mechanisms. We concentrate on a real-time audio stream, such as a telephone call, where it is not possible to stream audio in advance to a "pool"; instead it is necessary to achieve as small a delay as possible (between the recording of the speaker's voice and its replay to the listener). We use the RTP protocol for streaming audio.
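The Dynamic Time Warping comparison cited in this record (Müller, 2007) can be sketched in a few lines of dynamic programming. The absolute-difference local cost and the toy signals below are assumptions for illustration; the paper does not specify those details.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences
    (absolute difference as local cost, standard step pattern)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

original = np.sin(np.linspace(0, 4 * np.pi, 80))
stretched = np.sin(np.linspace(0, 4 * np.pi, 100))  # same tone, stretched in time
degraded = original + 0.5 * np.random.default_rng(2).standard_normal(80)
```

DTW absorbs the timing distortion of `stretched` (small distance) while the congestion-like noise of `degraded` still produces a large distance, which is what makes it a usable correctness metric for streamed audio.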

  16. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera, and they are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  17. Neonatal apneic seizure of occipital lobe origin: continuous video-EEG recording.

    Science.gov (United States)

    Castro Conde, José Ramón; González-Hernández, Tomás; González Barrios, Desiré; González Campo, Candelaria

    2012-06-01

    We present 2 term newborn infants with apneic seizure originating in the occipital lobe that was diagnosed by video-EEG. One infant had ischemic infarction in the distribution of the posterior cerebral artery, extending to the cingulate gyrus. In the other infant, only transient occipital hyperechogenicity was observed by using neurosonography. In both cases, although the critical EEG discharge was observed at the occipital level, the infants presented no clinical manifestations. In patient 1, the discharge extended to the temporal lobe first, with subtle motor manifestations and tachycardia, then synchronously to both hemispheres (with bradypnea/hypopnea), and the background EEG activity became suppressed, at which point the infant experienced apnea. In patient 2, background EEG activity became suppressed right at the end of the focal discharge, coinciding with the appearance of apnea. In neither case did the clinical description by observers coincide with video-EEG findings. The existence of connections between the posterior limbic cortex and the temporal lobe and midbrain respiratory centers may explain the clinical symptoms recorded in these 2 cases. The novel features reported here include video-EEG capture of apneic seizure, ischemic lesion in the territory of the posterior cerebral artery as the cause of apneic seizure, and the appearance of apnea when the epileptiform ictal discharge extended to other cerebral areas or when EEG activity became suppressed. To date, none of these clinical findings have been previously reported. We believe this pathology may in fact be fairly common, but that video-EEG monitoring is essential for diagnosis.

  18. Point-of-View Recording Devices for Intraoperative Neurosurgical Video Capture

    Directory of Open Access Journals (Sweden)

    Jose Luis Porras

    2016-10-01

Introduction: The ability to record and stream neurosurgery is an unprecedented opportunity to further research, medical education, and quality improvement. Here, we appraise the ease of implementation of existing POV devices when capturing and sharing procedures from the neurosurgical operating room, and detail their potential utility in this context. Methods: Our neurosurgical team tested and critically evaluated features of the Google Glass and Panasonic HX-A500 cameras, including ergonomics, media quality, and media sharing, in both the operating theater and the angiography suite. Results: Existing devices boast several features that facilitate live recording and streaming of neurosurgical procedures. Given that their primary application is not intended for the surgical environment, we identified a number of concrete, yet improvable, limitations. Conclusion: The present study suggests that neurosurgical video capture and live streaming represents an opportunity to contribute to research, education, and quality improvement. Despite this promise, shortcomings render existing devices impractical for serious consideration. We describe the features that future recording platforms should possess to improve upon existing technology.

  19. Nesting behavior of Palila, as assessed from video recordings

    Science.gov (United States)

    Laut, M.E.; Banko, P.C.; Gray, E.M.

    2003-01-01

We quantified nesting behavior of Palila (Loxioides bailleui), an endangered Hawaiian honeycreeper, by recording at nests during three breeding seasons using a black-and-white video camera connected to a videocassette recorder. A total of seven nests were observed. We measured the following factors for daylight hours: percentage of time the female was on the nest (attendance), length of attendance bouts by the female, length of nest recesses, and adult provisioning rates. Comparisons were made between three stages of the 40-day nesting cycle: incubation (day 1-day 16), early nestling stage (day 17-day 30 [i.e., nestlings ≤ 14 days old]), and late nestling stage (day 31-day 40 [i.e., nestlings > 14 days old]). Of the seven nests observed, four fledged at least one nestling and three failed. One of the failed nests was filmed being depredated by a feral cat (Felis catus). Female nest attendance was near 82% during the incubation stage and decreased to 21% as nestlings aged. We did not detect a difference in attendance bout length between stages of the nesting cycle. Mean length of nest recesses increased from 4.5 min during the incubation stage to over 45 min during the late nestling stage. The mean number of nest recesses per hour ranged from 1.6 to 2.0. Food was delivered to nestlings by adults an average of 1.8 times per hour during the early nestling stage and 1.5 times per hour during the late nestling stage, and this rate did not change over time. Characterization of parental behavior by video showed similarities to, but also key differences from, findings from blind observations. Results from this study will facilitate greater understanding of Palila reproductive strategies.

  20. Video event data recording of a taxi driver used for diagnosis of epilepsy

    Directory of Open Access Journals (Sweden)

    Kotaro Sakurai

    2014-01-01

A video event data recorder (VEDR) in a motor vehicle records images before and after a traffic accident. This report describes a taxi driver whose seizures were recorded by a VEDR, which proved extremely useful for the diagnosis of epilepsy. The patient was a 63-year-old right-handed Japanese male taxi driver. He collided with a streetlight. Two years prior to this incident, he had raced an engine for a long time while parked. The VEDR enabled confirmation that the accidents resulted from epileptic seizures, and he was diagnosed with symptomatic localization-related epilepsy. The VEDR is useful not only as traffic accident evidence; it might also contribute to a driver's health care and road safety.

  1. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

Dulaney, D. R.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J. M.

    2017-01-01

Background: Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective: To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals: Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods: Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  2. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  3. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
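The pipeline described above, signal quality indices feeding a logistic regression model, can be sketched as a toy reconstruction. The two indices below (spectral flatness and a clipping ratio) and the synthetic "good"/"poor" recordings are invented for illustration; the paper itself defines nine indices and uses expert annotations, neither of which is reproduced here.

```python
import numpy as np

def quality_indices(x):
    """Two toy signal-quality indices: spectral flatness (noisiness)
    and the fraction of near-full-scale samples (clipping/handling noise)."""
    spec = np.abs(np.fft.rfft(x)) + 1e-12
    flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)
    clipping = np.mean(np.abs(x) > 0.98 * np.max(np.abs(x)))
    return np.array([flatness, clipping])

def train_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression for a binary quality label."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)                  # decision boundary at p = 0.5

# Synthetic dataset: "good" = periodic signal + mild noise, "poor" = pure noise
rng = np.random.default_rng(3)
t = np.arange(1000) / 1000.0
good = [np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(1000) for _ in range(30)]
poor = [rng.standard_normal(1000) for _ in range(30)]
X = np.array([quality_indices(s) for s in good + poor])
y = np.array([1] * 30 + [0] * 30)                    # 1 = diagnostically useful
w = train_logistic(X, y)
accuracy = float(np.mean(predict(w, X) == y))
```

A real evaluation would, as in the paper, hold out a separate test set rather than scoring on the training data.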

  4. Video-EEG recording: a four-year clinical audit.

    LENUS (Irish Health Repository)

    O'Rourke, K

    2012-02-03

In the setting of a regional neurological unit without an epilepsy surgery service, as in our case, video-EEG telemetry is undertaken for three main reasons: to investigate whether frequent paroxysmal events represent seizures when there is clinical doubt, to attempt anatomical localization of partial seizures when standard EEG is unhelpful, and to attempt to confirm that seizures are non-epileptic when this is suspected. A clinical audit of all telemetry performed over a four-year period was carried out in order to determine the clinical utility of this aspect of the service and to identify means of improving effectiveness in the unit. Analysis of the data showed a high rate of negative studies with no attacks recorded. Of the positive studies, approximately 50% showed non-epileptic attacks. Strategies for improving the rate of positive investigations are discussed.

  5. Reduction of heart sound interference from lung sound signals using empirical mode decomposition technique.

    Science.gov (United States)

    Mondal, Ashok; Bhattacharya, P S; Saha, Goutam

    2011-01-01

During the recording of lung sound (LS) signals from the chest wall of a subject, the heart sound (HS) signal always interferes with them. This obscures the features of lung sound signals and creates confusion about pathological states, if any, of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference from the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of interfering signals, such as heart sound and environmental noise, and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart sound and lung sound. The proposed method is found to be superior in terms of time-domain, frequency-domain, and time-frequency-domain representations, and also in a listening test performed by a pulmonologist.
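The decompose-and-filter idea above rests on EMD's sifting step: repeatedly subtract the mean of the upper and lower extrema envelopes until an intrinsic mode function (IMF) remains. The sketch below is a deliberately crude, numpy-only version that uses linear-interpolated envelopes (standard EMD uses cubic splines and a formal stopping criterion); the synthetic fast/slow components standing in for heart and lung sounds are assumptions.

```python
import numpy as np

def sift(x, n_sifts=8):
    """Extract one IMF with a crude sift: subtract the mean of the
    linearly interpolated maxima and minima envelopes, a few times over."""
    h = x.copy()
    idx = np.arange(len(x))
    for _ in range(n_sifts):
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(idx, maxima, h[maxima])   # crude upper envelope
        lower = np.interp(idx, minima, h[minima])   # crude lower envelope
        h = h - (upper + lower) / 2.0
    return h

fs = 1000
t = np.arange(fs) / fs
fast = np.sin(2 * np.pi * 50 * t)    # stand-in for the heart-sound component
slow = np.sin(2 * np.pi * 5 * t)     # stand-in for the lung-sound component
mixed = fast + slow
imf1 = sift(mixed)                   # first IMF captures the fast oscillation
residue = mixed - imf1               # residue keeps the slow component
```

In the paper's setting, IMFs dominated by the heart-sound band would be discarded and the remaining components summed to reconstruct a cleaner lung sound.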

  6. [Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918] / Tõnu Tannberg

    Index Scriptorium Estoniae

    Tannberg, Tõnu, 1961-

    2013-01-01

    Arvustus: Encapsulated voices : Estonian sound recordings from the German prisoner-of-war camps in 1916-1918 (Das Baltikum in Geschichte und Gegenwart, 5). Hrsg. von Jaan Ross. Böhlau Verlag. Köln, Weimar und Wien 2012

  7. Fault Diagnosis of Motor Bearing by Analyzing a Video Clip

    Directory of Open Access Journals (Sweden)

    Siliang Lu

    2016-01-01

Conventional bearing fault diagnosis methods require specialized instruments to acquire signals that can reflect the health condition of the bearing. For instance, an accelerometer is used to acquire vibration signals, whereas an encoder is used to measure motor shaft speed. This study proposes a new method for simplifying the instruments for motor bearing fault diagnosis. Specifically, a video clip of a running bearing system is captured using a cellphone that is equipped with a camera and a microphone. The recorded video is subsequently analyzed to obtain the instantaneous frequency of rotation (IFR). The instantaneous fault characteristic frequency (IFCF) of the defective bearing is obtained by analyzing the sound signal recorded by the microphone. The fault characteristic order is calculated by dividing the IFCF by the IFR to identify the fault type of the bearing. The effectiveness and robustness of the proposed method are verified by a series of experiments. This study provides a simple, flexible, and effective solution for motor bearing fault diagnosis. Given that the signals are gathered using an affordable and accessible cellphone, the proposed method is proven suitable for diagnosing the health conditions of bearing systems located in remote areas where specialized instruments are unavailable or limited.
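The order computation at the heart of this method, dividing the fault frequency found in the audio track by the rotation frequency found in the video track, can be sketched as follows. The spectral-peak estimator, the synthetic traces, and the 3.5 characteristic order are illustrative assumptions; a real diagnosis would compare the estimated order against the bearing's geometry-derived characteristic orders (BPFO, BPFI, etc.).

```python
import numpy as np

def dominant_freq(x, fs):
    """Frequency (Hz) of the largest spectral peak, DC excluded."""
    spec = np.abs(np.fft.rfft(x - np.mean(x)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs, dur = 1000, 4.0
t = np.arange(int(fs * dur)) / fs
video_trace = np.sin(2 * np.pi * 20 * t)   # brightness flicker once per rev -> IFR
audio_trace = np.sin(2 * np.pi * 70 * t)   # periodic fault impacts -> IFCF
ifr = dominant_freq(video_trace, fs)       # estimated shaft speed, 20 Hz
ifcf = dominant_freq(audio_trace, fs)      # estimated fault frequency, 70 Hz
order = ifcf / ifr                         # fault characteristic order, ~3.5
```

Because the order is a ratio, it stays constant when the shaft speed drifts, which is what makes it a robust fingerprint of the fault location.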

  8. AFSC/ABL: Salisbury Sound sponge recovery

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — In 1995, an area of the seafloor near Salisbury Sound was trawled to identify immediate effects on large, erect sponges and sea whips. Video transects were made in...

  9. JINGLE: THE SOUNDING SYMBOL

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2013-12-01

    Full Text Available The article considers the role of jingles in the industrial era, from the emergence of regular radio broadcasting, sound films, and television up to modern video games, audio and video podcasts, online broadcasts, and mobile communications. Jingles are examined from the point of view of the theory of symbols: a forward motion is detected in the development of jingles from social symbols (radio call signs) to individual sign-images (ringtones). The article also traces the role of technical progress in the formation of jingles as important cultural audio elements of modern digital civilization.

  10. Video Game Accessibility: A Legal Approach

    Directory of Open Access Journals (Sweden)

    George Powers

    2015-02-01

    Full Text Available Video game accessibility may not seem significant to some, and it may sound trivial to anyone who does not play video games, but this assumption is false. With the digitalization of our culture, video games are an ever-increasing part of our lives. They contribute to peer-to-peer interaction, education, music and the arts. A video game can be created by hundreds of musicians and artists, and production budgets can exceed those of modern blockbuster films. Inaccessible video games are analogous to movie theaters without closed captioning or accessible facilities. The movement for accessible video games is small, unorganized and misdirected. Just as other battles to make society accessible were won through legislation and law, the battle for video game accessibility must be directed toward the law and not the market.

  11. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly through a phonocardiograph, a machine that records heart sounds. This paper presents an implementation of fractal dimension theory to classify phonocardiograms as a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. Classifying the phonocardiograms involved two steps: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The fractal dimensions of all phonocardiograms were then classified with the KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, using feature extraction by DWT decomposition at level 3 with kmax = 50, 5-fold cross-validation, and 5 neighbors in the K-NN algorithm. For fuzzy c-means clustering, the accuracy was 78.56%.
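
    Higuchi's algorithm, the core of the feature extraction above, estimates the fractal dimension as the slope of log curve-length against log scale. A minimal sketch follows; the signals and the kmax value are chosen for illustration, not taken from the paper (which used kmax = 50 on DWT/FFT features).

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m::k]
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # FD is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
fd_noise = higuchi_fd(rng.standard_normal(2000))  # near 2 for white noise
fd_sine = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))  # near 1 for a smooth curve
```

    Irregular signals (e.g., murmurs) fill the time-amplitude plane more densely and so yield higher fractal dimensions than smooth ones, which is what makes the dimension usable as a classification feature.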

  12. WT-Bird. Bird collision recording for offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Wiggelinkhuizen, E.J.; Rademakers, L.W.M.M.; Barhorst, S.A.M. [ECN Wind Energy, Petten (Netherlands); Den Boon, H.J. [E-Connection Project, Bunnik (Netherlands); Dirksen, S. [Bureau Waardenburg, Culemborg (Netherlands); Schekkerman, H. [Alterra, Wageningen (Netherlands)

    2006-03-15

    A new method for registration of bird collisions has been developed using video cameras and microphones combined with event triggering by acoustic vibration measurement. Remote access to the recorded images and sounds makes it possible to count the number of collisions as well as to identify the species. Currently a prototype system is being tested on an offshore-scale land-based wind turbine using bird dummies. After these tests, endurance tests are planned on other land-based turbines under offshore-like conditions.

  13. Developing an Interface to Order and Document Health Education Videos in the Electronic Health Record.

    Science.gov (United States)

    Wojcik, Lauren

    2015-01-01

    Transitioning to electronic health records (EHRs) provides an opportunity for health care systems to integrate educational content available on interactive patient systems (IPS) with the medical documentation system. This column discusses how one hospital simplified providers' workflow by making it easier to order educational videos and ensure that completed education is documented within the medical record. Integrating the EHR and IPS streamlined the provision of patient education, improved documentation, and supported the organization in meeting core requirements for Meaningful Use.

  14. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time-consuming... Starting from Susanne Bødker's method of 1996, three possible areas of expansion to her method for analyzing video data were found. Firstly, a technological expansion, due to contemporary developments in sophisticated analysis software since the mid-1990s. Secondly, a conceptual expansion, in which the applicability... of using Activity Theory outside the context of human-computer interaction is assessed. Lastly, a temporal expansion, facilitating an organized method for tracking the development of activities over time within the coding and analysis of video data. To expand on the above areas, a prototype coding...

  15. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  16. Characteristics of phenomenon and sound in microbubble emission boiling

    International Nuclear Information System (INIS)

    Zhu Guangyu; Sun Licheng; Tang Jiguo

    2014-01-01

    Background: Efficient heat-transfer technology is required in nuclear energy, so microbubble emission boiling (MEB) is attracting attention from many researchers due to its extremely high heat-dissipation capability. Purpose: An experimental setup was built to study the correspondence between boiling modes and the characteristics of the amplitude spectrum of the boiling sound. Methods: The heating element was a copper block heated by four Si-C heaters; its upper part was a cylinder 10 mm in diameter and 10 mm in height. Temperature data were measured by three T-type sheathed thermocouples fitted in the upper part of the copper block and recorded by an NI acquisition system. The temperature of the heating surface was estimated by extrapolating the temperature distribution. Boiling sound data were acquired by a hydrophone and processed by Fourier transform. Bubble behaviors were captured by a high-speed video camera with a lighting system. Results: In the nucleate boiling region, the boiling was not intensive and, as a result, the spectra did not present any peak. Once MEB was fully developed on the heating surface, an obvious peak appeared around a frequency of 300 Hz. This can be explained from the video data: the periodic expansion of the vapor film and its collapse into many extremely small bubbles cause MEB to present an obvious characteristic peak in its amplitude spectrum. Conclusion: The boiling mode can be distinguished by its amplitude spectrum. When MEB is fully developed, it presents a characteristic peak in its amplitude spectrum at a frequency between 300 Hz and 400 Hz. This proves that the boiling sound of MEB is closely related to the behavior of the vapor film. (authors)
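
    The spectral analysis described in the Methods — Fourier-transforming the hydrophone signal and looking for a characteristic peak — can be sketched on synthetic data. The sampling rate and signal model below are assumptions for illustration only:

```python
import numpy as np

fs = 8000                       # assumed sampling rate (Hz)
t = np.arange(fs) / fs          # one second of signal
rng = np.random.default_rng(1)

# Broadband "boiling noise" with a dominant component near 300 Hz,
# mimicking the characteristic MEB peak described above.
sound = 0.3 * rng.standard_normal(fs) + np.sin(2 * np.pi * 300 * t)

spectrum = np.abs(np.fft.rfft(sound)) / len(sound)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(sound), d=1 / fs)        # bin center frequencies
peak_hz = freqs[np.argmax(spectrum)]                 # dominant spectral peak
```

    With a one-second window the frequency resolution is 1 Hz, so the 300 Hz component lands exactly on a bin and dominates the broadband noise floor.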

  17. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
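
    As a much-simplified stand-in for the dissertation's DBN segmenter, the basic framing-and-detection step (50 ms frames, a short-time feature per frame, event boundaries where the feature changes) can be illustrated with a plain energy gate. All parameter values here are illustrative assumptions:

```python
import numpy as np

def segment_events(signal, fs, frame_ms=50, threshold=0.1):
    """Very simplified event segmentation: frame the signal, compute
    short-time RMS energy, and mark runs of frames above a threshold
    as sound events. (The dissertation tracks changes in acoustic
    features with a dynamic Bayesian network; this energy gate only
    illustrates the idea of finding where events begin and end.)"""
    hop = int(fs * frame_ms / 1000)
    n_frames = len(signal) // hop
    rms = np.array([np.sqrt(np.mean(signal[i * hop:(i + 1) * hop] ** 2))
                    for i in range(n_frames)])
    active = rms > threshold
    events, start = [], None
    for i, a in enumerate(active):          # collect (start, end) frame runs
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, n_frames))
    return events

fs = 16000
sig = np.zeros(fs * 2)                      # two seconds of silence...
sig[fs // 2:fs] = np.sin(2 * np.pi * 440 * np.arange(fs // 2) / fs)  # ...one event
events = segment_events(sig, fs)            # frame indices of the detected event
```

    Each detected event would then be indexed with a summary model of its features before retrieval, per the pipeline described above.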

  18. The Physiology of Fear and Sound

    DEFF Research Database (Denmark)

    Garner, Tom Alexander; Grimshaw, Mark

    2013-01-01

    The potential value of a looping biometric feedback system as a key component of adaptive computer video games is significant. Psychophysiological measures are essential to the development of an automated emotion recognition program, capable of interpreting physiological data into models of affect and systematically altering the game environment in response. This article presents empirical data the analysis of which advocates electrodermal activity and electromyography as suitable physiological measures to work effectively within a computer video game-based biometric feedback loop, within which sound...

  19. The Use of an Open Field Model to Assess Sound-Induced Fear and Anxiety Associated Behaviors in Labrador Retrievers.

    Science.gov (United States)

    Gruen, Margaret E; Case, Beth C; Foster, Melanie L; Lazarowski, Lucia; Fish, Richard E; Landsberg, Gary; DePuy, Venita; Dorman, David C; Sherman, Barbara L

    2015-01-01

    Previous studies have shown that the playing of thunderstorm recordings during an open-field task elicits fearful or anxious responses in adult beagles. The goal of our study was to apply this open field test to assess sound-induced behaviors in Labrador retrievers drawn from a pool of candidate improvised explosive devices (IED)-detection dogs. Being robust to fear-inducing sounds and recovering quickly is a critical requirement of these military working dogs. This study presented male and female dogs, with 3 minutes of either ambient noise (Days 1, 3 and 5), recorded thunderstorm (Day 2), or gunfire (Day 4) sounds in an open field arena. Behavioral and physiological responses were assessed and compared to control (ambient noise) periods. An observer blinded to sound treatment analyzed video records of the 9-minute daily test sessions. Additional assessments included measurement of distance traveled (activity), heart rate, body temperature, and salivary cortisol concentrations. Overall, there was a decline in distance traveled and heart rate within each day and over the five-day test period, suggesting that dogs habituated to the open field arena. Behavioral postures and expressions were assessed using a standardized rubric to score behaviors linked to canine fear and anxiety. These fear/anxiety scores were used to evaluate changes in behaviors following exposure to a sound stressor. Compared to control periods, there was an overall increase in fear/anxiety scores during thunderstorm and gunfire sound stimuli treatment periods. Fear/anxiety scores were correlated with distance traveled, and heart rate. Fear/anxiety scores in response to thunderstorm and gunfire were correlated. Dogs showed higher fear/anxiety scores during periods after the sound stimuli compared to control periods. In general, candidate IED-detection Labrador retrievers responded to sound stimuli and recovered quickly, although dogs stratified in their response to sound stimuli. Some dogs were

  20. Video Game Preservation in the UK: A Survey of Records Management Practices

    Directory of Open Access Journals (Sweden)

    Alasdair Bachell

    2014-10-01

    Full Text Available Video games are a cultural phenomenon; a medium like no other that has become one of the largest entertainment sectors in the world. While the UK boasts an enviable games development heritage, it risks losing a major part of its cultural output through an inability to preserve the games that are created by the country’s independent games developers. The issues go deeper than bit rot and other problems that affect all digital media; loss of context, copyright and legal issues, and the throwaway culture of the ‘next’ game all hinder the ability of fans and academics to preserve video games and make them accessible in the future. This study looked at the current attitudes towards preservation in the UK’s independent (‘indie’ video games industry by examining current record-keeping practices and analysing the views of games developers. The results show that there is an interest in preserving games, and possibly a desire to do so, but issues of piracy and cost prevent the industry from undertaking preservation work internally, and from allowing others to assume such responsibility. The recommendation made by this paper is not simply for preservation professionals and enthusiasts to collaborate with the industry, but to do so by advocating the commercial benefits that preservation may offer to the industry.

  1. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030

  2. Organ donation video messaging: differential appeal, emotional valence, and behavioral intention.

    Science.gov (United States)

    Rodrigue, J R; Fleishman, A; Vishnevsky, T; Fitzpatrick, S; Boger, M

    2014-10-01

    Video narratives increasingly are used to draw the public's attention to the need for more registered organ donors. We assessed the differential impact of donation messaging videos on appeal, emotional valence, and organ donation intentions in 781 non-registered adults. Participants watched six videos (four personal narratives, one informational video without personal narrative, and one unrelated to donation) with or without sound (subtitled), randomly sequenced to minimize order effects. We assessed appeal, emotional valence, readiness to register as organ donors, and donation information-seeking behavior. Compared to other video types, one featuring a pediatric transplant recipient (with or without sound) showed more favorable appeal (p emotional valence (p emotion (OR = 1.05, 95% CI = 1.03, 1.07, p < 0.001) were significant multivariable predictors of clicking through to the donation website. Brief, one-min videos can have a very dramatic and positive impact on willingness to consider donation and behavioral intentions to register as an organ donor. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Communicating Earth Science Through Music: The Use of Environmental Sound in Science Outreach

    Science.gov (United States)

    Brenner, C.

    2017-12-01

    The need for increased public understanding and appreciation of Earth science has taken on growing importance over the last several decades. Human society faces critical environmental challenges, both near-term and future, in areas such as climate change, resource allocation, geohazard threat and the environmental degradation of ecosystems. Science outreach is an essential component of engaging both policymakers and the public in the importance of managing these challenges. However, despite considerable efforts on the part of scientists and outreach experts, many citizens feel that scientific research and methods are both difficult to understand and remote from their everyday experience. As perhaps the most accessible of all art forms, music can provide a pathway through which the public can connect to Earth processes. The Earth is not silent: environmental sound can be sampled and folded into musical compositions, either with or without the additional sounds of conventional or electronic instruments. These compositions can be used in conjunction with other forms of outreach (e.g., as soundtracks for documentary videos or museum installations), or simply stand alone as testament to the beauty of geology and nature. As proof of concept, this presentation will consist of a musical composition that includes sounds from various field recordings of wind, swamps, ice and water (including recordings from the inside of glaciers).

  4. The Perception of Sounds in Phonographic Space

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    This thesis is about the perception of space in recorded music, with particular reference to stereo recordings of popular music. It explores how sound engineers create imaginary musical environments in which sounds appear to listeners in different ways. It also investigates some of the conditions... The third chapter examines how listeners understand and make sense of phonographic space. In the form of a critique of Pierre Schaeffer and Roger Scruton's notion of the acousmatic situation, I argue that our experience of recorded music has a twofold focus: the sound-in-itself and the sound's causality... the use of metaphors and image schemas in the experience and conceptualisation of phonographic space. With reference to descriptions of recordings by sound engineers, I argue that metaphors are central to our understanding of recorded music. This work is grounded in the tradition of cognitive linguistics...

  5. Checking Interceptions and Audio Video Recordings by the Court after Referral

    Directory of Open Access Journals (Sweden)

    Sandra Grădinaru

    2012-05-01

    Full Text Available In any event, the prosecutor and the judiciary should pay particular attention to the risk of their falsification, which can be achieved by taking only parts of conversations or communications that took place in the past and are declared to have been recorded recently, by removing parts of conversations or communications, or even by the translation or removal of images. This is why the legislature provided an express provision for their verification. The provisions of art. 916 Paragraph 1 of the Criminal Procedure Code offer the possibility of a technical expertise regarding the originality and continuity of the recordings, at the request of the prosecutor or the parties, or ex officio, where there are doubts about the correctness of the recording, in whole or in part, especially if it is not supported by the other evidence. Therefore, audio or video recordings serve as evidence in criminal proceedings if they are not challenged, or if they are confirmed by technical expertise where there were doubts about their conformity with reality. In the event that expertise does not establish the authenticity of the recordings, they will not be accepted as evidence in solving a criminal case, thus eliminating any probative value of the intercepted conversations and communications in that case, by applying article 64 Par. 2 of the Criminal Procedure Code.

  6. Reconstruction of mechanically recorded sound from an edison cylinder using three dimensional non-contact optical surface metrology

    Energy Technology Data Exchange (ETDEWEB)

    Fadeyev, V.; Haber, C.; Maul, C.; McBride, J.W.; Golden, M.

    2004-04-20

    Audio information stored in the undulations of grooves in a medium such as a phonograph disc record or cylinder may be reconstructed, without contact, by measuring the groove shape using precision optical metrology methods and digital image processing. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two dimensional image acquisition and analysis methods. The present work reports the first three dimensional reconstruction of mechanically recorded sound. The source material, a celluloid cylinder, was scanned using color coded confocal microscopy techniques and resulted in a faithful playback of the recorded information.

  7. C-space : Fostering new creative paradigms based on recording and sharing 'casual' videos through the internet

    NARCIS (Netherlands)

    Simoes, Bruno; Aksenov, Petr; Santos, Pedro; Arentze, Theo; De Amicis, Raffaele

    2015-01-01

    A key theme in ubiquitous computing is to create smart environments in which there is seamless integration of people, information, and physical reality. In this manuscript, we describe a set of tools that facilitate the creation of such environments, e.g., a service to transform videos recorded with

  8. Analisis Pengaruh Kualitas Video Rolling terhadap Kinerja Stasiun Relay Trans7 Pontianak

    OpenAIRE

    Fajar Maulana

    2013-01-01

    Video rolling is caused by the exciter, which generates both the audio signal and the video signal produced from the receiver unit, and is a form of internal disturbance. Video rolling happens because of a lack of synchronization between the audio signal and the video signal: either the audio signal is faster than the video signal, so that the sound information surpasses the destination of the video information, or the video signal is faster than au...

  9. OLIVE: Speech-Based Video Retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, Jean-Luc; den Hartog, Jurgen; den Hartog, Jeremy; Netter, Klaus

    1999-01-01

    This paper describes the Olive project which aims to support automated indexing of video material by use of human language technologies. Olive is making use of speech recognition to automatically derive transcriptions of the sound tracks, generating time-coded linguistic elements which serve as the

  10. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products, based on a list of conditions that might be diffic...

  11. Usefulness of bowel sound auscultation: a prospective evaluation.

    Science.gov (United States)

    Felder, Seth; Margel, David; Murrell, Zuri; Fleshner, Phillip

    2014-01-01

    Although the auscultation of bowel sounds is considered an essential component of an adequate physical examination, its clinical value remains largely unstudied and subjective. The aim of this study was to determine whether an accurate diagnosis of normal controls, mechanical small bowel obstruction (SBO), or postoperative ileus (POI) is possible based on bowel sound characteristics. Prospectively collected recordings of bowel sounds from patients with normal gastrointestinal motility, SBO diagnosed by computed tomography and confirmed at surgery, and POI diagnosed by clinical symptoms and a computed tomography without a transition point. Study clinicians were instructed to categorize the patient recording as normal, obstructed, ileus, or not sure. Using an electronic stethoscope, bowel sounds of healthy volunteers (n = 177), patients with SBO (n = 19), and patients with POI (n = 15) were recorded. A total of 10 recordings randomly selected from each category were replayed through speakers, with 15 of the recordings duplicated to surgical and internal medicine clinicians (n = 41) blinded to the clinical scenario. The sensitivity, positive predictive value, and intra-rater variability were determined based on the clinician's ability to properly categorize the bowel sound recording when blinded to additional clinical information. Secondary outcomes were the clinician's perceived level of expertise in interpreting bowel sounds. The overall sensitivity for normal, SBO, and POI recordings was 32%, 22%, and 22%, respectively. The positive predictive value of normal, SBO, and POI recordings was 23%, 28%, and 44%, respectively. Intra-rater reliability of duplicated recordings was 59%, 52%, and 53% for normal, SBO, and POI, respectively. No statistically significant differences were found between the surgical and internal medicine clinicians for sensitivity, positive predictive value, or intra-rater variability. 
Overall, 44% of clinicians reported that they rarely listened
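
    For reference, the sensitivity and positive predictive value reported above are computed from rating tallies as follows. The counts here are hypothetical, chosen only so that the SBO figures (22% sensitivity, 28% PPV) are reproduced; they are not the study's raw data.

```python
# Hypothetical tallies for the SBO category across all blinded ratings.
true_positives = 22    # SBO recordings correctly labeled "obstructed"
false_negatives = 78   # SBO recordings labeled anything else
false_positives = 57   # non-SBO recordings labeled "obstructed"

# Sensitivity: fraction of true SBO recordings the clinicians caught.
sensitivity = true_positives / (true_positives + false_negatives)

# Positive predictive value: fraction of "obstructed" labels that were right.
ppv = true_positives / (true_positives + false_positives)
```

    Sensitivities near the chance level for a four-option task are what drive the paper's skepticism about routine bowel sound auscultation.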

  12. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files store motion pictures and sounds much as they occur in real life. In today's world, the need for automated processing of the information in video files is increasing. Automated processing has a wide range of applications, including office and home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detection and tracking of object movement in video files play an important role. This article describes methods of detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.

  13. Simultaneous recordings of human microsaccades and drifts with a contemporary video eye tracker and the search coil technique.

    Directory of Open Access Journals (Sweden)

    Michael B McCamy

    Full Text Available Human eyes move continuously, even during visual fixation. These "fixational eye movements" (FEMs include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift and with contemporary, state-of-the art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs.

  14. Segmentation of expiratory and inspiratory sounds in baby cry audio recordings using hidden Markov models.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Nonaka, Yulri; Katahira, Kentaro; Okanoya, Kazuo

    2011-11-01

    The paper describes an application of machine learning techniques to identify expiratory and inspiratory phases from audio recordings of human baby cries. Crying episodes were recorded from 14 infants, spanning four vocalization contexts in their first 12 months of age; recordings from three individuals were annotated manually to identify expiratory and inspiratory sounds and used as training examples to segment automatically the recordings of the other 11 individuals. The proposed algorithm uses a hidden Markov model architecture, in which state likelihoods are estimated either with Gaussian mixture models or by converting the classification decisions of a support vector machine. The algorithm yields up to 95% classification precision (86% on average), and it generalizes over different babies, different ages, and vocalization contexts. The technique offers an opportunity to quantify expiration duration, count the crying rate, and measure other time-related characteristics of baby crying for screening, diagnosis, and research purposes over large populations of infants.
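
    The hidden Markov model architecture described above can be sketched with a minimal two-state Viterbi decoder. The transition probabilities, single-Gaussian emission model, and one-dimensional features below are illustrative assumptions standing in for the paper's GMM/SVM state-likelihood estimates:

```python
import math

def viterbi(obs, states, log_init, log_trans, log_emit):
    """Most likely state path for an observation sequence (log domain)."""
    v = [{s: log_init[s] + log_emit(s, obs[0]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: v[-1][p] + log_trans[p][s])
            col[s] = v[-1][best] + log_trans[best][s] + log_emit(s, o)
            ptr[s] = best
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):       # trace the best path backwards
        path.append(ptr[path[-1]])
    return path[::-1]

# Two states with sticky transitions, as in cry phase segmentation.
states = ("exp", "insp")
log_init = {"exp": math.log(0.5), "insp": math.log(0.5)}
log_trans = {"exp": {"exp": math.log(0.9), "insp": math.log(0.1)},
             "insp": {"exp": math.log(0.1), "insp": math.log(0.9)}}
means = {"exp": 1.0, "insp": -1.0}   # hypothetical 1-D feature means

def log_emit(state, x, sigma=0.5):
    # Unnormalized Gaussian log-likelihood of feature x under the state.
    return -((x - means[state]) ** 2) / (2 * sigma ** 2)

obs = [1.1, 0.9, 1.0, -0.8, -1.2, -0.9, 1.0]   # toy per-frame features
path = viterbi(obs, states, log_init, log_trans, log_emit)
```

    The sticky self-transitions smooth over single noisy frames, which is the practical reason an HMM outperforms frame-by-frame classification for this segmentation task.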

  15. Factors that Influence Learning Satisfaction Delivered by Video Streaming Technology

    Science.gov (United States)

    Keenan, Daniel Stephen

    2010-01-01

    In 2005, over 100,000 e-Learning courses were offered in over half of all U.S. postsecondary education institutions with nearly 90% of all community colleges and four year institutions offering online education. Streaming video is commonplace across the internet offering seamless video and sound anywhere connectivity is available effectively…

  16. Video and Sound Production: Flip out! Game on!

    Science.gov (United States)

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  17. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users which fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.
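As a toy example of the kind of multimodal inference described (not the authors' method), GPS positions and magnetometer headings from two cameras can be intersected to locate a shared area of interest; the coordinate convention and function name are assumptions:

```python
import numpy as np

def bearings_intersection(p1, h1, p2, h2):
    """Meeting point of two camera view rays -- a toy version of inferring
    an event's main area of interest from GPS positions and magnetometer
    headings (compass degrees, 0 = north = +y, east = +x)."""
    def direction(h):
        r = np.deg2rad(h)
        return np.array([np.sin(r), np.cos(r)])   # (east, north) unit vector
    d1, d2 = direction(h1), direction(h2)
    # solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2)
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

Parallel headings make the system singular; a production version would use a least-squares fit over many cameras rather than an exact two-ray solve.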

  18. Enhancing engagement in multimodality environments by sound movement in a virtual space

    DEFF Research Database (Denmark)

    Götzen, Amalia De

    2004-01-01

    …of instrumental sounds has allowed space as a musical instrumental practice to flourish. Electro-acoustic technologies let composers explore new listening dimensions and consider the sounds coming from loudspeakers as possessing different logical meanings from the sounds produced by traditional instruments. Medea, Adriano Guarnieri's "video opera", is an innovative work stemming from research in multimedia that demonstrates the importance and amount of research dedicated to sound movement in space. Medea is part of the Multi-sensory Expressive Gesture Application project (http://www.megaproject.org). Among…

  19. Observations on abundance of bluntnose sixgill sharks, Hexanchus griseus, in an urban waterway in Puget Sound, 2003-2005.

    Science.gov (United States)

    Griffing, Denise; Larson, Shawn; Hollander, Joel; Carpenter, Tim; Christiansen, Jeff; Doss, Charles

    2014-01-01

    The bluntnose sixgill shark, Hexanchus griseus, is a widely distributed but poorly understood large apex predator. Anecdotal reports of diver-shark encounters in the late 1990s and early 2000s in the Pacific Northwest stimulated interest in the normally deep-dwelling shark and its presence in the shallow waters of Puget Sound. Analysis of underwater video documenting sharks at the Seattle Aquarium's sixgill research site in Elliott Bay and mark-resight techniques were used to answer research questions about abundance and seasonality. Seasonal changes in relative abundance in Puget Sound from 2003-2005 are reported here. At the Seattle Aquarium study site, 45 sixgills were tagged with modified Floy visual marker tags, along with an estimated 197 observations of untagged sharks plus 31 returning tagged sharks, for a total of 273 sixgill observations recorded. A mark-resight statistical model based on analysis of underwater video estimated abundance ranging from a high of 98 sharks in July of 2004 to a low of 32 sharks in March of 2004. Both analyses found sixgills significantly more abundant in the summer months at the Seattle Aquarium's research station.
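The abstract does not specify the mark-resight model; as a hedged illustration of the general idea, the classic bias-corrected Lincoln-Petersen (Chapman) estimator computes abundance from tag counts in a resighting survey:

```python
def chapman_estimate(marked, resight_total, resight_marked):
    """Chapman's bias-corrected Lincoln-Petersen estimator -- a textbook
    mark-resight abundance estimate, not the paper's exact model.

    marked         : number of animals tagged in the marking phase
    resight_total  : animals observed in a later survey
    resight_marked : how many of those observations carried tags"""
    return (marked + 1) * (resight_total + 1) / (resight_marked + 1) - 1
```

For example, tagging 9 animals and later sighting 9, of which 4 are tagged, gives an estimate of 19 animals in the population.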

  20. Is it acceptable to video-record palliative care consultations for research and training purposes? A qualitative interview study exploring the views of hospice patients, carers and clinical staff.

    Science.gov (United States)

    Pino, Marco; Parry, Ruth; Feathers, Luke; Faull, Christina

    2017-09-01

    Research using video recordings can advance understanding of healthcare communication and improve care, but making and using video recordings carries risks. To explore views of hospice patients, carers and clinical staff about whether videoing patient-doctor consultations is acceptable for research and training purposes. We used semi-structured group and individual interviews to gather hospice patients', carers' and clinical staff's views. We used Braun and Clarke's thematic analysis. Interviews were conducted at one English hospice to inform the development of a larger video-based study. We invited patients with capacity to consent and who the care team judged were neither acutely unwell nor severely distressed (11), carers of current or past patients (5), palliative medicine doctors (7), senior nurses (4) and communication skills educators (5). Participants viewed video-based research on communication as valuable because of its potential to improve communication, care and staff training. Video-based research raised concerns including its potential to affect the nature and content of the consultation and threats to confidentiality; however, these were not seen as sufficient grounds for rejecting video-based research. Video-based research was seen as acceptable and useful providing that measures are taken to reduce possible risks across the recruitment, recording and dissemination phases of the research process. Video-based research is an acceptable and worthwhile way of investigating communication in palliative medicine. Situated judgements should be made about when it is appropriate to involve individual patients and carers in video-based research on the basis of their level of vulnerability and ability to freely consent.

  1. Investigating the relationship between pressure force and acoustic waveform in footstep sounds

    DEFF Research Database (Denmark)

    Grani, Francesco; Serafin, Stefania; Götzen, Amalia De

    2013-01-01

    In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were performed using a pair of sandals, each embedded with six pressure sensors. An investigation of the relationships between the recorded force and footstep sounds is presented, together with several possible applications of the system.

  2. Does sharing the electronic health record in the consultation enhance patient involvement? A mixed-methods study using multichannel video recording and in-depth interviews in primary care.

    Science.gov (United States)

    Milne, Heather; Huby, Guro; Buckingham, Susan; Hayward, James; Sheikh, Aziz; Cresswell, Kathrin; Pinnock, Hilary

    2016-06-01

    Sharing the electronic health-care record (EHR) during consultations has the potential to facilitate patient involvement in their health care, but research about this practice is limited. We used multichannel video recordings to identify examples and examine the practice of screen-sharing within 114 primary care consultations. A subset of 16 consultations was viewed by the general practitioner and/or patient in 26 reflexive interviews. Screen-sharing emerged as a significant theme and was explored further in seven additional patient interviews. Final analysis involved refining themes from interviews and observation of videos to understand how screen-sharing occurred, and its significance to patients and professionals. Eighteen (16%) of 114 videoed consultations involved instances of screen-sharing. Screen-sharing occurred in six of the subset of 16 consultations with interviews and was a significant theme in 19 of 26 interviews. The screen was shared in three ways: 'convincing' the patient of a diagnosis or treatment; 'translating' between medical and lay understandings of disease/medication; and by patients 'verifying' the accuracy of the EHR. However, patients and most GPs perceived the screen as the doctor's domain, not to be routinely viewed by the patient. Screen-sharing can facilitate patient involvement in the consultation, depending on the way in which sharing comes about, but the perception that the record belongs to the doctor is a barrier. To exploit the potential of sharing the screen to promote patient involvement, there is a need to reconceptualise and redesign the EHR. © 2014 The Authors Health Expectations Published by John Wiley & Sons Ltd.

  3. Nursing students' self-evaluation using a video recording of foley catheterization: effects on students' competence, communication skills, and learning motivation.

    Science.gov (United States)

    Yoo, Moon Sook; Yoo, Il Young; Lee, Hyejung

    2010-07-01

    An opportunity for a student to evaluate his or her own performance enhances self-awareness and promotes self-directed learning. Using three outcome measures of competency of procedure, communication skills, and learning motivation, the effects of self-evaluation using a video recording of the student's Foley catheterization were investigated in this study. The students in the experimental group (n = 20) evaluated their Foley catheterization performance by reviewing the video recordings of their own performance, whereas students in the control group (n = 20) received written evaluation guidelines only. The results showed that the students in the experimental group had significantly better scores on competency of the procedure and on communication skills. Self-evaluation of performance by reviewing a video recording thus appears to increase the competency of clinical skills in nursing students. Copyright 2010, SLACK Incorporated.

  4. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposes a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied for classification. Based on the experimental results, the scheme obtains a satisfactory classification result and provides a fundamental analysis toward human-robot interaction with socially assistive robots in caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.

  5. THE MODEL FOR DIEGETIC ANALYSIS OF SOUNDS IN SCREEN MEDIA

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2013-12-01

    Full Text Available This article includes the analysis of the relationship between representational visual spaces and sounds in screen media. The methodology presented in this paper can be used for the accurate classification and differentiation of screen sounds, as well as for the general analysis of the specific sound of screen media. For this, the concept of «diegesis» is used. It allows us to analyze the spatial specificity of audiovisual images in cinematographic works and the spatial-functional interactive action in video games and other multimedia.

  6. Avatar Weight Estimates based on Footstep Sounds in Three Presentation Formats

    DEFF Research Database (Denmark)

    Sikström, Erik; Götzen, Amalia De; Serafin, Stefania

    2015-01-01

    When evaluating a sound design for a virtual environment, the context in which it is to be implemented may influence how it is perceived. In this paper we perform an experiment comparing three presentation formats (audio only, video with audio, and an interactive immersive VR format) and their influence on a sound design evaluation task concerning footstep sounds. The evaluation involved estimating the perceived weight of a virtual avatar seen from a first-person perspective, as well as the suitability of the sound effect relative to the context. The results show significant differences for three…

  7. Music as Active Information Resource for Players in Video Games

    Science.gov (United States)

    Nagorsnick, Marian; Martens, Alke

    2015-01-01

    In modern video games, music can come in different shapes: it can be developed on a very high compositional level, with sophisticated sound elements like in professional film music; it can be developed on a very coarse level, underlying special situations (like danger or attack); it can also be automatically generated by sound engines. However, in…

  8. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
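A typical introductory exercise in video analysis is reading an object's position off successive frames and fitting the kinematics; a small numpy sketch with synthetic "tracked" data (the frame rate and positions are invented, not taken from the book):

```python
import numpy as np

# Toy video analysis: vertical positions of a dropped object read off
# successive frames (30 fps assumed); fit y = y0 + v0*t - 0.5*g*t**2.
fps = 30.0
t = np.arange(10) / fps                  # timestamps of the 10 frames (s)
g_true = 9.81
y = 2.0 - 0.5 * g_true * t**2            # synthetic "tracked" positions (m)

coeffs = np.polyfit(t, y, 2)             # quadratic fit to position vs time
g_est = -2.0 * coeffs[0]                 # fitted curvature -> acceleration
```

With real footage the positions would come from frame-by-frame tracking, and `g_est` would carry measurement noise; here the fit recovers 9.81 m/s² exactly.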

  9. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  10. Sound production in recorder-like instruments : II. a simulation model

    NARCIS (Netherlands)

    Verge, M.P.; Hirschberg, A.; Causse, R.

    1997-01-01

    A simple one-dimensional representation of recorderlike instruments, that can be used for sound synthesis by physical modeling of flutelike instruments, is presented. This model combines the effects on the sound production by the instrument of the jet oscillations, vortex shedding at the edge of the

  11. Does seeing an Asian face make speech sound more accented?

    Science.gov (United States)

    Zheng, Yi; Samuel, Arthur G

    2017-08-01

    Prior studies have reported that seeing an Asian face makes American English sound more accented. The current study investigates whether this effect is perceptual, or if it instead occurs at a later decision stage. We first replicated the finding that showing static Asian and Caucasian faces can shift people's reports about the accentedness of speech accompanying the pictures. When we changed the static pictures to dubbed videos, reducing the demand characteristics, the shift in reported accentedness largely disappeared. By including unambiguous items along with the original ambiguous items, we introduced a contrast bias and actually reversed the shift, with the Asian-face videos yielding lower judgments of accentedness than the Caucasian-face videos. By changing to a mixed rather than blocked design, so that the ethnicity of the videos varied from trial to trial, we eliminated the difference in accentedness rating. Finally, we tested participants' perception of accented speech using the selective adaptation paradigm. After establishing that an auditory-only accented adaptor shifted the perception of how accented test words are, we found that no such adaptation effect occurred when the adapting sounds relied on visual information (Asian vs. Caucasian videos) to influence the accentedness of an ambiguous auditory adaptor. Collectively, the results demonstrate that visual information can affect the interpretation, but not the perception, of accented speech.

  12. The frequency range of TMJ sounds.

    Science.gov (United States)

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint, and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher range (117-1922 Hz) than in the skin contact recordings (up to 375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds, but they need methods also capable of recording the high-frequency vibrations.
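Energy-peak estimates like those above come from time-frequency analysis; a much simpler magnitude-spectrum sketch (a generic stand-in, not the authors' time-frequency distribution) finds the dominant frequency of a short recording:

```python
import numpy as np

def dominant_frequency(signal, fs, nfft=256):
    """Peak frequency of a short recording via the windowed magnitude
    spectrum -- far simpler than the time-frequency distributions used
    to localize TMJ click energy, but illustrates the idea."""
    window = np.hanning(len(signal))              # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(signal * window, n=nfft))
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

For a 1 kHz tone sampled at 8 kHz the function returns 1000 Hz, to within one FFT bin.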

  13. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
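The recurrence-plot front end can be sketched in a few lines: delay-embed the signal (Takens), threshold pairwise distances, and compute a recurrence feature. This toy version computes only the recurrence rate, not the three features or the HMM stage from the paper, and the embedding parameters are illustrative:

```python
import numpy as np

def recurrence_rate(x, dim=3, tau=2, eps=None):
    """Takens delay embedding plus the simplest recurrence-plot feature
    (recurrence rate) -- an illustrative fragment, not the paper's full
    three-feature, HMM-based detector."""
    n = len(x) - (dim - 1) * tau
    # delay-embedded trajectory: rows are (x[t], x[t+tau], ..., x[t+(dim-1)tau])
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * d.max()           # heuristic threshold, an assumption
    R = d < eps                       # recurrence matrix
    return R.mean()                   # fraction of recurrent point pairs
```

Swallowing segments and breath segments yield different recurrence statistics, which is what makes such features usable as HMM observations.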

  14. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  15. Infra-sound cancellation and mitigation in wind turbines

    Science.gov (United States)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulating subconscious pathways, causing endolymphatic hydrops, and possibly potentiating noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address the low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency but opposite phase to the recorded infra-sound. They also produce pressure wave components within the audible range, reducing the perception of the infra-sound and minimizing the sensing of the residual infra-sound.
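In the ideal case, the proposed cancellation reduces to summing the infra-sound with an equal-amplitude, phase-inverted copy; a schematic numpy illustration (the frequency and amplitude are assumed for the example):

```python
import numpy as np

# Idealized active cancellation: the loudspeaker emits the recorded
# infra-sound with inverted phase; the residual is the sum of the two.
fs = 1000                                    # sample rate (Hz), assumed
t = np.arange(fs) / fs                       # one second of signal
infra = 0.8 * np.sin(2 * np.pi * 4.0 * t)    # 4 Hz pressure fluctuation
anti = -infra                                # equal amplitude, opposite phase
residual = infra + anti                      # perfect cancellation here
```

In practice the residual grows with any phase or amplitude mismatch between the recorded and emitted waves, which is why real active noise control needs adaptive filtering rather than a fixed inversion.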

  16. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.
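For instance, the equal-tempered scale construction that the book reviews fixes each semitone at a frequency ratio of 2^(1/12); a one-line helper (the function name and A4 reference are our own, not the book's):

```python
def equal_tempered(n, a4=440.0):
    """Frequency of the note n semitones above (or below, for negative n)
    A4 in twelve-tone equal temperament: each semitone multiplies the
    frequency by 2**(1/12), so an octave (12 semitones) doubles it."""
    return a4 * 2.0 ** (n / 12.0)
```

Twelve semitones up gives A5 at 880 Hz, and nine semitones down gives middle C at about 261.63 Hz.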

  17. Pacific and Atlantic herring produce burst pulse sounds.

    Science.gov (United States)

    Wilson, Ben; Batty, Robert S; Dill, Lawrence M

    2004-02-07

    The commercial importance of Pacific and Atlantic herring (Clupea pallasii and Clupea harengus) has ensured that much of their biology has received attention. However, their sound production remains poorly studied. We describe the sounds made by captive wild-caught herring. Pacific herring produce distinctive bursts of pulses, termed Fast Repetitive Tick (FRT) sounds. These trains of broadband pulses (1.7-22 kHz) lasted between 0.6 s and 7.6 s. Most were produced at night; feeding regime did not affect their frequency, and fish produced FRT sounds without direct access to the air. Digestive gas or gulped air transfer to the swim bladder, therefore, do not appear to be responsible for FRT sound generation. Atlantic herring also produce FRT sounds, and video analysis showed an association with bubble expulsion from the anal duct region (i.e. from the gut or swim bladder). To the best of the authors' knowledge, sound production by such means has not previously been described. The function(s) of these sounds are unknown, but as the per capita rates of sound production by fish at higher densities were greater, social mediation appears likely. These sounds may have consequences for our understanding of herring behaviour and the effects of noise pollution.

  18. Non-technical skills for obstetricians conducting forceps and vacuum deliveries: qualitative analysis by interviews and video recordings.

    Science.gov (United States)

    Bahl, Rachna; Murphy, Deirdre J; Strachan, Bryony

    2010-06-01

    Non-technical skills are cognitive and social skills required in an operational task. These skills have been identified and taught in the surgical domain but are of particular relevance to obstetrics where the patient is awake, the partner is present and the clinical circumstances are acute and often stressful. The aim of this study was to define the non-technical skills of an operative vaginal delivery (forceps or vacuum) to facilitate transfer of skills from expert obstetricians to trainee obstetricians. Qualitative study using interviews and video recordings. The study was conducted at two university teaching hospitals (St. Michael's Hospital, Bristol and Ninewells Hospital, Dundee). Participants included 10 obstetricians and eight midwives identified as experts in conducting or supporting operative vaginal deliveries. Semi-structured interviews were carried out using routine clinical scenarios. The experts were also video recorded conducting forceps and vacuum deliveries in a simulation setting. The interviews and video recordings were transcribed verbatim and analysed using thematic coding. The anonymised data were independently coded by the three researchers and then compared for consistency of interpretation. The experts reviewed the coded data for respondent validation and clarification. The themes that emerged were used to identify the non-technical skills required for conducting an operative vaginal delivery. The final skills list was classified into seven main categories. Four categories (situational awareness, decision making, task management, and team work and communication) were similar to the categories identified in surgery. Three further categories unique to obstetrics were also identified (professional relationship with the woman, maintaining professional behaviour and cross-monitoring of performance). 
This explicitly defined skills taxonomy could aid trainees' understanding of the non-technical skills to be considered when conducting an operative

  19. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video Photoplethysmography (VPPG) is a numerical technique that processes standard RGB video data of exposed human skin to extract the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, and the number of video frames used for data processing.
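A bare-bones VPPG pipeline (a generic sketch, not either of the two algorithms compared in the study) averages the green channel over a skin region in each frame, removes the mean, and picks the dominant FFT frequency inside a plausible HR band:

```python
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate heart rate from the frame-averaged green-channel trace.

    green_means : 1-D array, mean green value of the skin ROI per frame
    fps         : video frame rate
    A minimal VPPG sketch, without the face tracking, detrending and
    filtering a real pipeline would add."""
    x = green_means - np.mean(green_means)         # remove the DC level
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x))**2
    band = (freqs >= 0.7) & (freqs <= 4.0)         # 42-240 bpm, plausible HR
    return 60.0 * freqs[band][np.argmax(power[band])]
```

On a synthetic trace pulsing at 1.2 Hz the estimate is 72 bpm; real skin traces are far noisier, which is where motion and lighting sensitivity enters.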

  20. 37 CFR 382.2 - Royalty fees for the digital performance of sound recordings and the making of ephemeral...

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Royalty fees for the digital... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.2 Royalty fees for the... monthly royalty fee for the public performance of sound recordings pursuant to 17 U.S.C. 114(d)(2) and the...

  1. A Coincidental Sound Track for "Time Flies"

    Science.gov (United States)

    Cardany, Audrey Berger

    2014-01-01

    Sound tracks serve a valuable purpose in film and video by helping tell a story, create a mood, and signal coming events. Holst's "Mars" from "The Planets" yields a coincidental soundtrack to Eric Rohmann's Caldecott-winning book, "Time Flies." This pairing provides opportunities for upper elementary and…

  2. Algorithm for Video Summarization of Bronchoscopy Procedures

    Directory of Open Access Journals (Sweden)

    Leszczuk Mikołaj I

    2011-12-01

    Full Text Available Abstract Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions
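One common focus measure for flagging blurry, "non-informative" frames is the variance of the image Laplacian; a numpy-only sketch (a generic sharpness criterion, not the authors' exact detector):

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian over a grayscale frame -- a
    standard focus measure: low variance suggests a blurry frame worth
    excluding from a summary. A generic criterion, not the paper's full
    set of frame-selection rules."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]     # vertical neighbours
           + img[1:-1, :-2] + img[1:-1, 2:])    # horizontal neighbours
    return lap.var()
```

A sharp, high-contrast frame scores well above a featureless (fully defocused) one, so a threshold on this score can gate which frames enter the summary.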

  3. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene presenting a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is far higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  4. Motivational Videos and the Library Media Specialist: Teachers and Students on Film--Take 1

    Science.gov (United States)

    Bohot, Cameron Brooke; Pfortmiller, Michelle

    2009-01-01

    Today's students are bombarded with digital imagery and sound nearly 24 hours a day. Video use in the classroom is engaging, and a teacher can instantly grab students' attention. The content of the videos comes from many sources: the curriculum, the student handbook, and even the school rules. By creating the videos, teachers are not only…

  5. The cinematic soundscape: conceptualising the use of sound in Indian films

    OpenAIRE

    Budhaditya Chattopadhyay

    2012-01-01

    This article examines the trajectories of sound practice in Indian cinema and conceptualises the use of sound since the advent of talkies. By studying and analysing a number of sound films from different technological phases of direct recording, magnetic recording and present-day digital recording, the article proposes three corresponding models that are developed on the basis of observations on the use of sound in Indian cinema. These models take their point of departure in specific phases...

  6. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy, more and more surgeons are choosing to record and store videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
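The record does not spell out the feature signatures or the metric used; as a toy illustration of the retrieval idea (a compact per-frame descriptor compared under a metric), the sketch below uses normalized color histograms and the L1 distance, both stand-ins for the paper's actual descriptor and metric.

```python
import numpy as np

def signature(image, bins=8):
    """Toy content descriptor: a normalized per-channel color histogram.
    (A stand-in for the paper's feature signatures, which are richer.)"""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(image.shape[-1])
    ]).astype(float)
    return hist / hist.sum()

def retrieve(query_sig, frame_sigs):
    """Index of the archived frame whose signature is closest to the
    query under the L1 (Manhattan) distance, the stand-in metric here."""
    return int(np.argmin([np.abs(query_sig - s).sum() for s in frame_sigs]))

# A small 'archive' of frames with distinct overall colouring.
rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(25 * i + 16, 20, (32, 32, 3)), 0, 255)
          for i in range(10)]
sigs = [signature(f) for f in frames]

# A captured static image: frame 4 plus mild sensor noise.
noisy = np.clip(frames[4] + rng.normal(0, 5, frames[4].shape), 0, 255)
print(retrieve(signature(noisy), sigs))  # → 4
```

Matching a captured still to its source segment then reduces to a nearest-neighbor search over the per-frame signatures of the archived video.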

  7. Investigating interactional competence using video recordings in ESL classrooms to enhance communication

    Science.gov (United States)

    Krishnasamy, Hariharan N.

    2016-08-01

    Interactional competence, or knowing and using the appropriate skills for interaction in various communication situations within a given speech community and culture, is important in the field of business and professional communication [1], [2]. Similar to many developing countries in the world, Malaysia is a growing economy and undergraduates will have to acquire appropriate communication skills. In this study, two aspects of interactional communicative competence were investigated, namely linguistic and paralinguistic behaviors in small group communication and conflict management in small group communication. Two groups of student participants were given a problem-solving task based on a letter of complaint. The two groups of students were video recorded during class hours for 40 minutes. The videos and transcriptions of the group discussions were analyzed to examine the use of language and interaction in small groups. The analysis, findings and interpretations were verified with three lecturers in the field of communication. The results showed that students were able to accomplish the given task using verbal and nonverbal communication. However, participation was unevenly distributed, with two students talking for less than a minute. Negotiation was based more on alternative views, and consensus was easily achieved. In conclusion, suggestions are given on ways to improve English language communication.

  8. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to future implementation of the sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphical user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
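The convolution step described above (rendering a sound for headphones with measured HRTFs) can be sketched as follows; the two-tap impulse responses here are toy stand-ins for real measured non-individual HRIRs, which have hundreds of taps:

```python
import numpy as np

fs = 44100  # sampling frequency used for the Delta sound

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono sound for headphones by convolving it with a
    left/right head-related impulse response (HRIR) pair."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# A crude 'Delta' sound: a unit click at t = 0 in a 10 ms buffer.
click = np.zeros(int(0.010 * fs))
click[0] = 1.0

# Toy HRIRs for a source on the left: earlier and louder at the left ear.
hrir_l = np.array([0.9, 0.3])
hrir_r = np.array([0.0, 0.0, 0.5, 0.2])  # 2-sample interaural delay

left, right = binaural_render(click, hrir_l, hrir_r)
```

Played over headphones, the interaural time and level differences encoded in the HRIR pair are what let the listener attribute a spatial position to the source.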

  9. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To describe the scenery comprehensively, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  10. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topic

  11. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid-state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is compared graphically against users' requirements.

  12. Visual bias in subjective assessments of automotive sounds

    DEFF Research Database (Denmark)

    Ellermeier, Wolfgang; Legarth, Søren Vase

    2006-01-01

    In order to evaluate how strong the influence of visual input on sound quality evaluation may be, a naive sample of 20 participants was asked to judge interior automotive sound recordings while simultaneously being exposed to pictures of cars. Twenty-two recordings of second-gear acceleration...

  13. Monitoring and assessment of ingestive chewing sounds for prediction of herbage intake rate in grazing cattle.

    Science.gov (United States)

    Galli, J R; Cangiano, C A; Pece, M A; Larripa, M J; Milone, D H; Utsumi, S A; Laca, E A

    2018-05-01

    Accurate measurement of herbage intake rate is critical to advance knowledge of the ecology of grazing ruminants. This experiment tested the integration of behavioral and acoustic measurements of chewing and biting to estimate herbage dry matter intake (DMI) in dairy cows offered micro-swards of contrasting plant structure. Micro-swards constructed with plastic pots were offered to three lactating Holstein cows (608±24.9 kg of BW) in individual grazing sessions (n=48). Treatments were a factorial combination of two forage species (alfalfa and fescue) and two plant heights (tall=25±3.8 cm and short=12±1.9 cm) and were offered on a gradient of increasing herbage mass (10 to 30 pots) and number of bites (~10 to 40 bites). During each grazing session, sounds of biting and chewing were recorded with a wireless microphone placed on the cows' foreheads and a digital video camera to allow synchronized audio and video recordings. Dry matter intake rate was higher in tall alfalfa than in the other three treatments (32±1.6 v. 19±1.2 g/min). A high proportion of jaw movements in every grazing session (23 to 36%) were compound jaw movements (chew-bites) that appeared to be a key component of chewing and biting efficiency and of the ability of cows to regulate intake rate. Dry matter intake was accurately predicted based on easily observable behavioral and acoustic variables. Chewing sound energy measured as energy flux density (EFD) was linearly related to DMI, with 74% of EFD variation explained by DMI. Total chewing EFD, number of chew-bites and plant height (tall v. short) were the most important predictors of DMI. The best model explained 91% of the variation in DMI with a coefficient of variation of 17%. Ingestive sounds integrate valuable information to remotely monitor feeding behavior and predict DMI in grazing cows.
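The record states that chewing sound energy, measured as energy flux density (EFD), was linearly related to DMI. The paper's exact EFD computation is not given here; a common operational definition is signal energy per unit time, and the prediction is an ordinary least-squares fit. A sketch under those assumptions, with illustrative (not the study's) numbers:

```python
import numpy as np

def energy_flux_density(signal, fs):
    """EFD of a chewing-sound segment, taken here as total signal energy
    divided by segment duration (one operational definition; the paper's
    exact computation may differ)."""
    return np.sum(np.square(signal)) / (len(signal) / fs)

# Illustrative session data: total chewing EFD vs. measured DMI (g).
efd = np.array([1.2, 2.1, 2.9, 4.2, 5.0, 6.3])
dmi = np.array([150.0, 260.0, 340.0, 500.0, 610.0, 740.0])

# Ordinary least squares fit: DMI = a * EFD + b
a, b = np.polyfit(efd, dmi, 1)
pred = a * efd + b
r2 = 1 - np.sum((dmi - pred) ** 2) / np.sum((dmi - dmi.mean()) ** 2)
```

In the study, such a linear model explained 74% of the EFD variation by DMI, with the best multi-predictor model reaching 91%.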

  14. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days....... RESULTS: Kappa values for within-day identification of footstrike pattern revealed intra-rater agreement of 0.83-0.88 and inter-rater agreement of 0.50-0.63. Corresponding figures for between-day identification of footstrike pattern were 0.63-0.69 and 0.41-0.53, respectively. Identification of video time...... in 36% of the identifications (kappa=0.41). The 95% limits of agreement for identification of video time frame at initial contact may, at times, allow for different identification of footstrike pattern. Clinicians should, therefore, be encouraged to continue using clinical 2D video setups for intra...
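The agreement figures above are Cohen's kappa values. For reference, unweighted Cohen's kappa for two raters over the same items can be computed like this (the footstrike labels are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters labelling the same items
    (e.g. a footstrike pattern per video): observed agreement corrected
    for agreement expected by chance."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(count_a[lab] / n * count_b[lab] / n
                for lab in set(count_a) | set(count_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative labels for six videos rated by two raters:
a = ['rear', 'rear', 'rear', 'fore', 'fore', 'fore']
b = ['rear', 'rear', 'fore', 'fore', 'fore', 'fore']
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

On the usual interpretation scale, the record's intra-rater values (0.83-0.88) indicate almost perfect agreement while the between-day inter-rater values (0.41-0.53) indicate only moderate agreement.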

  15. Coupled auralization and virtual video for immersive multimedia displays

    Science.gov (United States)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.

  16. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled. This greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental condition. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
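A simplified sketch of such a deterministic pipeline (envelope extraction, a fixed amplitude threshold, then timing-based S1/S2 labeling) on a synthetic phonocardiogram; the signal model and all thresholds are illustrative, not the paper's parameters:

```python
import numpy as np

fs = 1000  # Hz

# Synthetic phonocardiogram: one S1 and one S2 burst per cycle (~75 bpm),
# with the S1->S2 (systolic) interval shorter than the S2->next-S1 interval.
t = np.arange(0, 4, 1 / fs)
pcg = 0.01 * np.random.default_rng(1).normal(size=t.size)  # noise floor
for cycle_start in np.arange(0.1, 4, 0.8):
    for onset, f in ((cycle_start, 40.0), (cycle_start + 0.3, 60.0)):
        idx = (t >= onset) & (t < onset + 0.06)
        pcg[idx] += np.sin(2 * np.pi * f * t[idx]) * np.hanning(idx.sum())

# Envelope: rectify, then smooth with a 50 ms moving average.
win = int(0.05 * fs)
env = np.convolve(np.abs(pcg), np.ones(win) / win, mode='same')

# Deterministic rules: a component starts where the envelope crosses a
# fixed amplitude threshold, and S1 vs S2 is assigned from inter-onset
# timing (the shorter gap to the next sound follows S1).
above = env > 0.1
onsets = np.flatnonzero(above[1:] & ~above[:-1]) / fs
labels = ['S1' if i + 1 < len(onsets) and onsets[i + 1] - on < 0.4 else 'S2'
          for i, on in enumerate(onsets)]
```

Each detected component can then be checked against its expected frequency band and amplitude, which is where additional heuristics (noise threshold, equipment, environment) would enter.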

  17. "Is That How I Really Sound?": Using iPads for Fluency Practice

    Science.gov (United States)

    Ness, Molly

    2017-01-01

    This teaching tip showcases how students use iPads to video record themselves orally reading. In the Record, Listen, Reflect process, students conduct repeated readings with a familiar text, watch the recorded video, and conduct running records on themselves. Having an opportunity to watch videos of their own reading gives students a glimpse of…

  18. Individualized music played for agitated patients with dementia: analysis of video-recorded sessions.

    Science.gov (United States)

    Ragneskog, H; Asplund, K; Kihlgren, M; Norberg, A

    2001-06-01

    Many nursing home patients with dementia suffer from symptoms of agitation (e.g. anxiety, shouting, irritability). This study investigated whether individualized music could be used as a nursing intervention to reduce such symptoms in four patients with severe dementia. The patients were video-recorded during four sessions in four periods, including a control period without music, two periods where individualized music was played, and one period where classical music was played. The recordings were analysed by systematic observations and the Facial Action Coding System. Two patients became calmer during some of the individualized music sessions; one patient remained sitting in her armchair longer, and the other patient stopped shouting. For the two patients who were most affected by dementia, the noticeable effect of music was minimal. If the nursing staff succeed in discovering the music preferences of an individual, individualized music may be an effective nursing intervention to mitigate anxiety and agitation for some patients.

  19. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques to enable surgeons to view these videos. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, cabled through a hub to a standard laptop computer's universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were utilized to convert the videos into a format suitable for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in all formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons to grade via GOALS through various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.

  20. Toward brain correlates of natural behavior: fMRI during violent video games.

    Science.gov (United States)

    Mathiak, Klaus; Weber, René

    2006-12-01

    Modern video games represent highly advanced virtual reality simulations and often contain virtual violence. For a significant number of young males, playing video games is a quotidian activity, making it an almost natural behavior. Recordings of brain activation with functional magnetic resonance imaging (fMRI) during gameplay may reflect neuronal correlates of real-life behavior. We recorded 13 experienced gamers (18-26 years; average 14 hrs/week playing) while playing a violent first-person shooter game (a violent computer game played in self-perspective) by means of distortion- and dephasing-reduced fMRI (3 T; single-shot triple-echo echo-planar imaging [EPI]). Content analysis of the video and sound with 100 ms time resolution yielded relevant behavioral variables. These variables explained significant signal variance across large distributed networks. The occurrence of violent scenes revealed significant neuronal correlates in an event-related design. Activation of dorsal and deactivation of rostral anterior cingulate and amygdala characterized the mid-frontal pattern related to virtual violence. Statistics and effect sizes can be considered large in these areas. Optimized imaging strategies allowed for single-subject and single-trial analysis with good image quality at basal brain structures. We propose that virtual environments can be used to study neuronal processes involved in semi-naturalistic behavior as determined by content analysis. Importantly, the activation pattern reflects brain-environment interactions rather than stimulus responses as observed in classical experimental designs. We relate our findings to the general discussion on social effects of playing first-person shooter games. (c) 2006 Wiley-Liss, Inc.

  1. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline ...tracking the bikers; the jumping videos, featuring people on trampolines, the swing videos, which are usually recorded in profile view, and the walking

  2. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

    Full Text Available Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for the purposes of shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decision-making; cross-campus teaching.

  3. Sounds from seeing silent motion: Who hears them, and what looks loudest?

    Science.gov (United States)

    Fassnidge, Christopher J; Freeman, Elliot D

    2018-03-09

    Some people hear what they see: car indicator lights, flashing neon shop signs, and people's movements as they walk may all trigger an auditory sensation, which we call the visual-evoked auditory response (vEAR or 'visual ear'). We have conducted the first large-scale online survey (N > 4000) of this little-known phenomenon. We analysed the prevalence of vEAR, what induces it, and what other traits are associated with it. We assessed prevalence by asking whether respondents had previously experienced vEAR. Participants then rated silent videos for vividness of evoked auditory sensations, and answered additional trait questions. Prevalence appeared higher relative to other typical synaesthesias. Prior awareness and video ratings were associated with greater frequency of other synaesthesias, including flashes evoked by sounds, and musical imagery. Higher-rated videos often depicted meaningful events that predicted sounds (e.g., collisions). However, even videos containing abstract flickering or moving patterns could also elicit higher ratings, despite having no predictable association with sounds. Such videos had higher levels of raw 'motion energy' (ME), which we quantified using a simple computational model of motion processing in early visual cortex. Critically, only respondents reporting prior awareness of vEAR tended to show a positive correlation between video ratings and ME. This specific sensitivity to ME suggests that in vEAR, signals from visual motion processing may affect audition relatively directly without requiring higher-level interpretative processes. Our other findings challenge the popular assumption that individuals with synaesthesia are rare and have idiosyncratic patterns of brain hyper-connectivity. Instead, our findings of apparently high prevalence and broad associations with other synaesthesias and traits are jointly consistent with a common dependence on normal variations in physiological mechanisms of disinhibition or excitability of
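The study quantified raw motion energy (ME) with a computational model of early visual motion processing. As a much cruder stand-in for that model, the mean squared inter-frame luminance difference captures the same broad quantity, as sketched below on a drifting versus a static grating:

```python
import numpy as np

def motion_energy(frames):
    """Crude proxy for a video's raw motion energy: the mean squared
    luminance difference between consecutive frames. (The paper's model
    of early visual motion processing is richer; this only illustrates
    the idea of scoring videos by low-level motion content.)"""
    frames = np.asarray(frames, dtype=float)
    diffs = np.diff(frames, axis=0)
    return float(np.mean(diffs ** 2))

# A drifting grating (nonzero ME) vs. the same grating held static (zero ME).
x = np.arange(64)
drifting = [np.tile(np.sin(2 * np.pi * (x - t) / 16), (64, 1)) for t in range(30)]
static = [drifting[0]] * 30
```

Scoring each stimulus video this way yields the per-video ME values that can then be correlated with participants' vividness ratings.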

  4. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0088063)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  5. Seawater Temperature and Salinity Moored Time-Series Records, Collected During 2010 and 2011 in Vieques Sound and Virgin Passage (NODC Accession 0077910)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea-Bird SBE37SM MicroCat Conductivity/Temperature (CT) recorders were deployed between March 2010 and April 2011 on shallow water moorings located in Vieques Sound,...

  6. An open access database for the evaluation of heart sound algorithms.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  7. The Culture Specific Application of Sound in Nigerian Video Movies ...

    African Journals Online (AJOL)

    ... have in recent times become a major object of attraction in terms of artistry, ... The success of this industry could not have been complete without the inputs from ... makers sound engineers and the musicians, who supply the needed music.

  8. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    Science.gov (United States)

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.

  9. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.

  10. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information, counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system will summarize the results of the review, stop the recorder, and advise the user of the completion of the review. In addition, the Review Station will check for any video loss on the tape.

  11. Proximal mechanisms for sound production in male Pacific walruses

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Reichmuth, Colleen

    2012-01-01

    The songs of male walruses during the breeding season have been noted to have some of the most unusual characteristics that have been observed among mammalian sounds. In contrast to the more guttural vocalizations of most other carnivores, their acoustic displays have impulsive and metallic features more similar to those found in industrial work places than in nature. The patterned knocks and bells that comprise male songs are not thought to be true vocalizations, but rather, sounds produced with structures other than the vocal tract and larynx. To determine how male walruses produce and emit these sounds, we ... anatomical origins of knocking and bell sounds and gained a mechanistic understanding of how these sounds are generated within the body and transmitted to the environment. These pathways are illustrated with acoustic and video data and considered with respect to the unique biology of this species.

  12. Engineering aspect of the microwave ionosphere nonlinear interaction experiment (MINIX) with a sounding rocket

    Science.gov (United States)

    Nagatomo, Makoto; Kaya, Nobuyuki; Matsumoto, Hiroshi

    The Microwave Ionosphere Nonlinear Interaction Experiment (MINIX) is a sounding rocket experiment to study possible effects of strong microwave fields in case it is used for energy transmission from the Solar Power Satellite (SPS) upon the Earth's atmosphere. Its secondary objective is to develop high power microwave technology for space use. Two rocket-borne magnetrons were used to emit 2.45 GHz microwave in order to make a simulated condition of power transmission from an SPS to a ground station. Sounding of the environment radiated by microwave was conducted by the diagnostic package onboard the daughter unit which was separated slowly from the mother unit. The main design drivers of this experiment were to build such high power equipments in a standard type of sounding rocket, to keep the cost within the budget and to perform a series of experiments without complete loss of the mission. The key technology for this experiment is a rocket-borne magnetron and high voltage converter. Location of position of the daughter unit relative to the mother unit was a difficult requirement for a spin-stabilized rocket. These problems were solved by application of such a low cost commercial products as a magnetron for microwave oven and a video tape recorder and camera.

  13. Ecoacoustic Music for Geoscience: Sonic Physiographies and Sound Casting

    Science.gov (United States)

    Burtner, M.

    2017-12-01

    The author describes specific ecoacoustic applications in his original compositions, Sonic Physiography of a Time-Stretched Glacier (2015), Catalog of Roughness (2017), Sound Cast of Matanuska Glacier (2016) and Ecoacoustic Concerto (Eagle Rock) (2014). Ecoacoustic music uses technology to map systems from nature into music through techniques such as sonification, material amplification, and field recording. The author aspires for this music to be descriptive of the data (as one would expect from a visualization) and also to function as engaging and expressive music/sound art in its own right. In this way, ecoacoustic music might provide a fitting accompaniment to a scientific presentation (such as music for a science video) while also offering an exemplary concert hall presentation for a dedicated listening public. The music can at once support the communication of scientific research and help science make inroads into culture. The author discusses how music created using the data, sounds and methods derived from earth science can recast this research into a sonic art modality. Such music can amplify the communication and dissemination of scientific knowledge by broadening the diversity of methods and formats we use to bring excellent scientific research to the public. Music can also open the public's imagination to science, inspiring curiosity and emotional resonance. Hearing geoscience as music may help a non-scientist access scientific knowledge in new ways, and it can greatly expand the types of venues in which this work can appear. Anywhere music is played - concert halls, festivals, galleries, radio, etc. - becomes a venue for scientific discovery.
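The parameter-mapping sonification mentioned in this record can be illustrated with a minimal sketch; the linear pitch mapping, the frequency range, and the function name are assumptions for illustration, not the composer's actual technique.

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map a data series linearly onto a frequency range (parameter-mapping
    sonification); returns one frequency in Hz per data sample."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant series
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]
```

A glacier-melt time series, for instance, would rise in pitch as the measured values increase, which is the basic idea behind mapping earth-science data into sound.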

  14. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible, existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS ...

  15. Why live recording sounds better: A case study of Schumann’s Träumerei

    Directory of Open Access Journals (Sweden)

    Haruka Shoda

    2015-01-01

    Full Text Available We explore the concept that artists perform best in front of an audience. The negative effects of performance anxiety are much better known than their related cousin on the other shoulder: the positive effects of social facilitation. The present study, however, reveals a listener's preference for performances recorded in front of an audience. In Study 1, we prepared two types of recordings of Träumerei performed by 13 pianists: recordings in front of an audience and those with no audience. According to the evaluation by 153 listeners, the recordings performed in front of an audience sounded better, suggesting that the presence of an audience enhanced or facilitated the performance. In Study 2, we analyzed pianists' durational and dynamic expressions. According to the functional principal components analyses, we found that the expression of Träumerei consisted of three components: the overall quantity, the cross-sectional contrast between the final and the remaining sections, and the control of the expressive variability. Pianists' expressions were targeted more to the average of the cross-sectional variation in the audience-present than in the audience-absent recordings. In Study 3, we explored a model that explained listeners' responses induced by pianists' acoustical expressions, using path analyses. The final model indicated that the cross-sectional variation of the duration and that of the dynamics determined listeners' evaluations of the quality and the emotionally moving experience, respectively. In line with humans' preference for commonality, the more average the durational expressions were in live recording, the better the listeners' evaluations were, regardless of their musical experiences. Only the well-experienced listeners (at least 16 years of musical training) were moved more by the deviated dynamic expressions in live recording, suggesting a link between the experienced listener's emotional experience and the unique dynamics in

  16. Video Surveillance: Privacy Issues and Legal Compliance

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2015-01-01

    Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a...

  17. Energy use of televisions and video cassette recorders in the U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Meier, Alan; Rosen, Karen

    1999-03-01

    In an effort to more accurately determine nationwide energy consumption, the U.S. Department of Energy has recently commissioned studies with the goal of improving its understanding of the energy use of appliances in the miscellaneous end-use category. This study presents an estimate of the residential energy consumption of two of the most common domestic appliances in the miscellaneous end-use category: color televisions (TVs) and video cassette recorders (VCRs). The authors used a bottom-up approach in estimating national TV and VCR energy consumption. First, they obtained estimates of stock and usage from national surveys, while TV and VCR power measurements and other data were recorded at repair and retail shops. Industry-supplied shipment and sales distributions were then used to minimize bias in the power measurement samples. To estimate national TV and VCR energy consumption values, ranges of power draw and mode usage were created to represent situations in homes with more than one unit. Average energy use values for homes with one unit, two units, etc. were calculated and summed to provide estimates of total national TV and VCR energy consumption.
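The bottom-up method described in this record multiplies unit stock by per-mode power draw and per-mode usage hours. A minimal sketch of that arithmetic (the mode names and wattages below are hypothetical illustrations, not the study's measured values):

```python
def annual_energy_kwh(stock, watts_by_mode, hours_per_day_by_mode):
    """Bottom-up estimate: stock * per-mode power draw * per-mode daily usage,
    summed over modes and scaled to a year, in kWh."""
    daily_wh = sum(watts_by_mode[m] * hours_per_day_by_mode[m]
                   for m in watts_by_mode)
    return stock * daily_wh * 365 / 1000.0

# Hypothetical example: one TV drawing 60 W for 4 h/day active
# and 5 W for 20 h/day in standby.
per_tv = annual_energy_kwh(1, {"on": 60, "standby": 5}, {"on": 4, "standby": 20})
```

Multiplying the per-unit figure by the national stock (and averaging over homes with one, two, or more units, as the authors did) yields the national total.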

  18. High speed video recording system on a chip for detonation jet engine testing

    Directory of Open Access Journals (Sweden)

    Samsonov Alexander N.

    2018-01-01

    Full Text Available This article describes the development of a system on a chip for high-speed video recording. The research was motivated by the difficulty of finding FPGAs and CPUs that combine wide bandwidth, high speed, and a large number of multipliers for real-time signal analysis. The current trend of high-density silicon device integration will soon result in a hybrid sensor-controller-memory circuit packed in a single chip. This research was the first step in a series of experiments in the manufacturing of hybrid devices. The current task is high-level synthesis of high-speed logic and a CPU core in an FPGA. The work resulted in the implementation and examination of an FPGA-based prototype.

  19. Record Desktop Activity as Streaming Videos for Asynchronous, Video-Based Collaborative Learning.

    Science.gov (United States)

    Chang, Chih-Kai

    As Web-based courses using videos have become popular in recent years, the issue of managing audiovisual aids has become noteworthy. The contents of audiovisual aids may include a lecture, an interview, a featurette, an experiment, etc. The audiovisual aids of Web-based courses are transformed into the streaming format that can make the quality of…

  20. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    Science.gov (United States)

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation on heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) the HRV using a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with PC-based sound card (Audacity) and a Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, participated in the present study. Following a standard protocol, a 5-min ECG was recorded after 10 min of supine rest, simultaneously by the portable analog amplifier with PC-based sound card and by the Biopac module, with surface electrodes in the lead II position. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in Kubios software. Short-term HRV indexes in both time and frequency domain were used. The unpaired Student's t-test and Pearson correlation coefficient test were used for the analysis using the R statistical software. No statistically significant differences were observed when comparing the values analyzed by means of the two devices for HRV. Correlation analysis revealed perfect positive correlation (r = 0.99, P < 0.001) between the values in time and frequency domain obtained by the devices. On the basis of the results of the present study, we suggest that the calculation of HRV values in the time and frequency domains by RR series obtained from the PC-based sound card is probably as reliable as those obtained by the gold standard Biopac MP36.
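The study computed its HRV indexes in Kubios and the correlation in R; purely for illustration, the core time-domain quantities and the Pearson correlation can be sketched as follows (a simplified stand-in, not the study's code).

```python
import math

def sdnn(rr_ms):
    """SDNN: sample standard deviation of RR intervals (time-domain HRV)."""
    mean = sum(rr_ms) / len(rr_ms)
    var = sum((r - mean) ** 2 for r in rr_ms) / (len(rr_ms) - 1)
    return math.sqrt(var)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def pearson_r(x, y):
    """Pearson correlation, as used to compare the two devices' HRV values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Running `sdnn` and `rmssd` on the RR series from each device and correlating the per-subject results is the essence of the validation reported above (r = 0.99).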

  1. Computer-Aided Video Differential Planimetry

    Science.gov (United States)

    Tobin, Michael; Djoleto, Ben D.

    1984-08-01

    THE VIDEO DIFFERENTIAL PLANIMETER (VDP)1 is a remote sensing instrument that can measure minute changes in the area of any object seen by an optical scanning system. The composite video waveforms obtained by scanning the object against a contrasting background are amplified and shaped to yield a sequence of constant amplitude pulses whose polarity distinguishes the studied area from its background and whose varying widths reflect the dynamics of the viewed object. These pulses are passed through a relatively long time-constant capacitor-resistor circuit and are then fed into an integrator. The net integration voltage resulting from the most recent sequence of object-background time pulses is recorded and the integrator is returned to zero at the end of each video frame. If the object's area remains constant throughout the following frame, the integrator's summation will also remain constant. However, if the object's area varies, the positive and negative time pulses entering the integrator will change, and the integrator's summation will vary proportionately. The addition of a computer interface and a video recorder enhances the versatility and the resolving power of the VDP by permitting the repeated study and analysis of selected portions of the recorded data, thereby uncovering the major sources of the object's dynamics. Among the medical and biological procedures for which COMPUTER-AIDED VIDEO DIFFERENTIAL PLANIMETRY is suitable are Ophthalmoscopy, Endoscopy, Microscopy, Plethysmography, etc. A recent research study in Ophthalmoscopy2 will be cited to suggest a useful application of Video Differential Planimetry.
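The integration scheme described above — object and background pulses of opposite polarity summed per frame, with frame-to-frame differences tracking area change — might be sketched digitally as follows. The data layout (lists of (is_object, pulse_width) pairs per frame) is an assumption for illustration; the original instrument does this in analog hardware.

```python
def frame_area_signal(scanline_pulses):
    """Net integrator output for one video frame: object pulses contribute
    positively, background pulses negatively, so the sum tracks object area."""
    return sum(width if is_object else -width
               for is_object, width in scanline_pulses)

def differential_area(frames):
    """Frame-to-frame change in the integrated signal -- the 'differential'
    in Video Differential Planimetry."""
    areas = [frame_area_signal(f) for f in frames]
    return [b - a for a, b in zip(areas, areas[1:])]
```

A constant object produces a flat sequence of frame sums, while a growing or shrinking object produces proportional positive or negative differences, mirroring the behavior of the analog integrator.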

  2. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras were controlled in real time from the ground on a sub-orbital rocket during a fifteen-minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and also the analog video signal that would be sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  3. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention/prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We ... observation and instruction (directives) relayed across different spaces; 2) the use of recorded video by participants to visualise, spatialise and localise talk and action that is distant in time and/or space; 3) the translating, stretching and cutting of social experience in and through the situated use ...

  4. Two Shared Rapid Turn Taking Sound Interfaces for Novices

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie; Andersen, Hans Jørgen; Raudaskoski, Pirkko Liisa

    2012-01-01

    This paper presents the results of user interaction with two explorative music environments (sound systems A and B) that were inspired by the Banda Linda music tradition in two different ways. The sound systems adapted to how a team of two players improvised and made a melody together in an interleaved fashion: Systems A and B used a fuzzy logic algorithm and pattern recognition to respond with modifications of a background rhythm. In an experiment with a pen tablet interface as the music instrument, users aged 10-13 were to tap tones and continue each other's melody. The sound systems rewarded users sonically if they managed to add tones to their mutual melody in a rapid turn-taking manner with rhythmical patterns. Videos of experiment sessions show that user teams contributed to a melody in ways that resemble conversation. Interaction data show that each sound system made player teams play ...

  5. Listening panel agreement and characteristics of lung sounds digitally recorded from children aged 1–59 months enrolled in the Pneumonia Etiology Research for Child Health (PERCH) case–control study

    Science.gov (United States)

    Park, Daniel E; Watson, Nora L; Buck, W Chris; Bunthi, Charatdao; Devendra, Akash; Ebruke, Bernard E; Elhilali, Mounya; Emmanouilidou, Dimitra; Garcia-Prats, Anthony J; Githinji, Leah; Hossain, Lokman; Madhi, Shabir A; Moore, David P; Mulindwa, Justin; Olson, Dan; Awori, Juliet O; Vandepitte, Warunee P; Verwey, Charl; West, James E; Knoll, Maria D; O'Brien, Katherine L; Feikin, Daniel R; Hammit, Laura L

    2017-01-01

    Introduction Paediatric lung sound recordings can be systematically assessed, but methodological feasibility and validity is unknown, especially from developing countries. We examined the performance of acoustically interpreting recorded paediatric lung sounds and compared sound characteristics between cases and controls. Methods Pneumonia Etiology Research for Child Health staff in six African and Asian sites recorded lung sounds with a digital stethoscope in cases and controls. Cases aged 1–59 months had WHO severe or very severe pneumonia; age-matched community controls did not. A listening panel assigned examination results of normal, crackle, wheeze, crackle and wheeze or uninterpretable, with adjudication of discordant interpretations. Classifications were recategorised into any crackle, any wheeze or abnormal (any crackle or wheeze) and primary listener agreement (first two listeners) was analysed among interpretable examinations using the prevalence-adjusted, bias-adjusted kappa (PABAK). We examined predictors of disagreement with logistic regression and compared case and control lung sounds with descriptive statistics. Results Primary listeners considered 89.5% of 792 case and 92.4% of 301 control recordings interpretable. Among interpretable recordings, listeners agreed on the presence or absence of any abnormality in 74.9% (PABAK 0.50) of cases and 69.8% (PABAK 0.40) of controls, presence/absence of crackles in 70.6% (PABAK 0.41) of cases and 82.4% (PABAK 0.65) of controls and presence/absence of wheeze in 72.6% (PABAK 0.45) of cases and 73.8% (PABAK 0.48) of controls. Controls, tachypnoea, >3 uninterpretable chest positions, crying, upper airway noises and study site predicted listener disagreement. Among all interpretable examinations, 38.0% of cases and 84.9% of controls were normal (p<0.0001); wheezing was the most common sound (49.9%) in cases. Conclusions Listening panel and case–control data suggests our methodology is feasible, likely valid
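The PABAK statistic used throughout this record is the standard prevalence-adjusted, bias-adjusted kappa, PABAK = 2·Po − 1, where Po is the observed proportion of agreement; for instance, the reported 74.9% agreement yields PABAK ≈ 0.50, matching the figures above. A minimal sketch (function name illustrative):

```python
def pabak(rater_a, rater_b):
    """Prevalence-adjusted, bias-adjusted kappa for two raters:
    PABAK = 2 * Po - 1, where Po is the observed proportion of agreement."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and of equal length")
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return 2 * po - 1
```

Unlike Cohen's kappa, PABAK depends only on observed agreement, which is why it is preferred when category prevalence is very skewed (as with rare wheeze or crackle findings).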

  6. A Taxonomy of Asynchronous Instructional Video Styles

    Science.gov (United States)

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  7. Video as a Medium for Learning and Teaching

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Videos play an important role in today's digital era. According to Cisco®, video (business and consumer combined) was 59% of the total Internet traffic in 2014. Video is permeating our educational institutions, transforming the way we teach, learn, study, communicate and work (Kaltura Report 2015). But are videos always the best choice? In this lecture we examine the benefits of the use of video in learning as well as its limits. Tips on how to minimize those limits will be explained. Example short videos that demonstrate success (or not) stories will be shown. Finally, guidelines for making good videos for education will be given. NB! All Academic Training lectures are recorded but not webcasted. The recording will be linked from this event and the CDS Academic Training collection. Participation is free. No registration needed. Bio: Pedro de Freitas has completed an MSc in learning & teaching technologies and an MSc in Psychology at the University of Geneva. His thesis subject ...

  8. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various ... findings: 1) They are based on a collaborative approach. 2) The sketches act as a means of externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and show how the factors relate to steps where ... the participants shape, record, review and edit their work, leading the participants to new insights about their work.

  9. The Learning Potential of Video Sketching

    DEFF Research Database (Denmark)

    Gundersen, Peter Bukovica; Ørngreen, Rikke; Hautopp, Heidi

    2017-01-01

    Designers across various disciplines have used sketching as an integrative part of their everyday practice, and sketching has proven to have a multitude of purposes in professional design. The purpose of this paper is to explore what happens when an extra layer of video recording is added during the early ... a new one or another is rejected. Also, video can make participants very, and even too, self-aware, though in explanatory and persuasive sessions this may support participants in using more precise and explicit language. Based on these experiments, four different steps of collaborative video sketching have been identified: shaping, recording, viewing and editing. Combined with the different modes, these steps constitute the basis of our video sketching framework. This framework has been used as a tool for redesigning learning activities. It suggests new scenarios to include in future research using ...

  10. New operator's console recorder

    International Nuclear Information System (INIS)

    Anon.

    2009-01-01

    This article described a software module that automatically records the images shown on multiple HMI or SCADA operator displays. Videos used for monitoring activities at industrial plants can be combined with the operator console videos and data from a process historian. This enables engineers, analysts or investigators to see what is occurring in the plant, what the operator is seeing on the HMI screen, and all relevant real-time data from an event. In the case of a leak at a pumping station, investigators could watch plant video taken at a remote site showing fuel oil creeping across the floor, along with real-time data being acquired from pumps, valves and the receiving tank while the leak is occurring. The video shows the operator's HMI screen as well as the alarm screen that signifies the leak detection. The Longwatch Operator's Console Recorder and Video Historian are used together to acquire data about actual plant management because they show everything that happens during an event. The Console Recorder automatically retrieves and replays operator displays by clicking on a time-based alarm or system message. Playback of the video feed is a valuable tool for training and analysis purposes, and can help mitigate insurance and regulatory issues by eliminating uncertainty and conjecture. 1 fig.

  11. Chaotic dynamics of respiratory sounds

    International Nuclear Information System (INIS)

    Ahlstrom, C.; Johansson, A.; Hult, P.; Ask, P.

    2006-01-01

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.
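The Kaplan-Yorke dimension reported in this record is conventionally computed from the Lyapunov spectrum as DKY = k + (λ1 + … + λk)/|λk+1|, where k is the largest index for which the partial sum of exponents remains non-negative. A sketch of that standard formula (not the authors' code):

```python
def kaplan_yorke_dimension(lyapunov_spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D_KY = k + (lambda_1 + ... + lambda_k) / |lambda_{k+1}|,
    with k the largest index keeping the partial sum non-negative."""
    exps = sorted(lyapunov_spectrum, reverse=True)
    partial = 0.0
    for k, lam in enumerate(exps):
        if partial + lam < 0:
            return k + partial / abs(lam)
        partial += lam
    return float(len(exps))  # sum never goes negative
```

A spectrum with a positive largest exponent but a negative total sum, as the authors report, yields a non-integer DKY, which is one indicator of chaotic (strange-attractor) dynamics.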

  12. Chaotic dynamics of respiratory sounds

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, C. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden) and Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)]. E-mail: christer@imt.liu.se; Johansson, A. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Hult, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden); Ask, P. [Department of Biomedical Engineering, Linkoepings Universitet, IMT/LIU, Universitetssjukhuset, S-58185 Linkoeping (Sweden); Biomedical Engineering, Orebro University Hospital, S-70185 Orebro (Sweden)

    2006-09-15

    There is a growing interest in nonlinear analysis of respiratory sounds (RS), but little has been done to justify the use of nonlinear tools on such data. The aim of this paper is to investigate the stationarity, linearity and chaotic dynamics of recorded RS. Two independent data sets from 8 + 8 healthy subjects were recorded and investigated. The first set consisted of lung sounds (LS) recorded with an electronic stethoscope and the other of tracheal sounds (TS) recorded with a contact accelerometer. Recurrence plot analysis revealed that both LS and TS are quasistationary, with the parts corresponding to inspiratory and expiratory flow plateaus being stationary. Surrogate data tests could not provide statistically sufficient evidence regarding the nonlinearity of the data. The null hypothesis could not be rejected in 4 out of 32 LS cases and in 15 out of 32 TS cases. However, the Lyapunov spectra, the correlation dimension (D2) and the Kaplan-Yorke dimension (DKY) all indicate chaotic behavior. The Lyapunov analysis showed that the sum of the exponents was negative in all cases and that the largest exponent was found to be positive. The results are partly ambiguous, but provide some evidence of chaotic dynamics of RS, both concerning LS and TS. The results motivate continuous use of nonlinear tools for analysing RS data.

  13. A video for teaching english tenses

    Directory of Open Access Journals (Sweden)

    Frida Unsiah

    2017-04-01

    Students of the English Language Education Program in the Faculty of Cultural Studies, Universitas Brawijaya, ideally master grammar before taking the degree of Sarjana Pendidikan. However, the facts show that they are still weak in grammar, especially tenses. Therefore, the researchers initiated the development of a video as a medium to teach tenses. The objective is that, by using video, students get a better understanding of tenses so that they can communicate in English accurately and contextually. To develop the video, the researchers used the ADDIE model (Analysis, Design, Development, Implementation, Evaluation). First, the researchers analyzed the students' learning needs to determine the product to be developed, in this case a movie about English tenses. Then, the researchers developed the video as the product. The product was then validated by a media expert, who assessed attractiveness, typography, audio, image, and usefulness, and by a content expert, who assessed the language aspects and the English tenses used by the actors in the video, including the grammar content, pronunciation, and fluency of the actors. The result of the validation shows that the video developed was considered good. Theoretically, it is appropriate for use in English grammar classes. However, the media expert suggests that it still needs some improvement for the next development, especially the synchronization between lip movement and sound in the scenes, while the content expert suggests that the grammar content of the video should focus on one tense only to provide a more detailed concept of the tense.

  14. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.
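
    As a rough illustration of the outcome measures named above (event timing, frequency, and duration), a coded-event summary can be computed in a few lines; the event tuples and behavior labels below are hypothetical, not the tool's actual output format.

```python
from collections import defaultdict

# Hypothetical coded events: (behavior label, onset seconds, offset seconds).
coded = [
    ("gesture", 2.0, 3.5),
    ("speech", 4.0, 9.0),
    ("gesture", 10.0, 11.0),
]

def summarize(events):
    """Per-behavior frequency plus total and mean duration in seconds."""
    stats = defaultdict(lambda: {"count": 0, "total": 0.0})
    for label, onset, offset in events:
        stats[label]["count"] += 1
        stats[label]["total"] += offset - onset
    return {label: {**s, "mean": s["total"] / s["count"]}
            for label, s in stats.items()}

summary = summarize(coded)
```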

  15. The sound of arousal in music is context-dependent.

    Science.gov (United States)

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  16. Detection of goal events in soccer videos

    Science.gov (United States)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

    In this paper, we present an automatic extraction of goal events in soccer videos by using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) candidate detection of highlight events based on the information provided by the feature extraction methods and a Hidden Markov Model (HMM), 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method vs. the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources. In total we have seven hours of soccer games, comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, ambient audience speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
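
    A minimal, numpy-only sketch of the MFCC computation for a single audio frame (windowed power spectrum, triangular mel filterbank, log, DCT-II) is shown below. The filter and coefficient counts are common defaults, not necessarily those of the paper, and a production pipeline would add framing, liftering and delta features.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_ceps=13):
    """MFCCs of one frame: power spectrum -> mel energies -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    energies = mel_filterbank(n_filters, n_fft, sr) @ spec
    log_e = np.log(energies + 1e-10)          # floor avoids log(0)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return dct @ log_e

sr = 16000
t = np.arange(512) / sr
coeffs = mfcc(np.sin(2 * np.pi * 440.0 * t), sr)   # 13 coefficients for one frame
```

    Sequences of such frame-level vectors are what an HMM-based highlight detector would consume.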

  17. Using video-based observation research methods in primary care health encounters to evaluate complex interactions.

    Science.gov (United States)

    Asan, Onur; Montague, Enid

    2014-01-01

    The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment, highlight important methodological considerations, and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. We reviewed studies in the literature which used video methods in health care research, and we also drew on our own experience from the video studies we conducted in primary care settings. This paper highlights the benefits of using video techniques, such as multi-channel recording and video coding, and compares "unmanned" video recording with the traditional observation method in primary care research. We propose a list that can be followed step by step to conduct an effective video study in a primary care setting for a given problem. This paper also describes obstacles researchers should anticipate when using video recording methods in future studies. With new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilised as a data collection tool because of confidentiality and privacy issues. However, it has many benefits as opposed to traditional observations, and recent studies using video recording methods have introduced new research areas and approaches.

  18. Reviews Website: Online Graphing Calculator Video Clip: Learning From the News Phone App: Graphing Calculator Book: Challenge and Change: A History of the Nuffield A-Level Physics Project Book: SEP Sound Book: Reinventing Schools, Reforming Teaching Book: Physics and Technology for Future Presidents iPhone App: iSeismometer Web Watch

    Science.gov (United States)

    2011-01-01

    WE RECOMMEND Online Graphing Calculator Calculator plots online graphs Challenge and Change: A History of the Nuffield A-Level Physics Project Book delves deep into the history of Nuffield physics SEP Sound Booklet has ideas for teaching sound but lacks some basics Reinventing Schools, Reforming Teaching Fascinating book shows how politics impacts on the classroom Physics and Technology for Future Presidents A great book for teaching physics for the modern world iSeismometer iPhone app teaches students about seismic waves WORTH A LOOK Teachers TV Video Clip Lesson plan uses video clip to explore new galaxies Graphing Calculator App A phone app that handles formulae and graphs WEB WATCH Physics.org competition finds the best websites

  19. Relating pressure measurements to phenomena observed in high speed video recordings during tests of explosive charges in a semi-confined blast chamber

    CSIR Research Space (South Africa)

    Mostert, FJ

    2012-09-01

    Full Text Available initiation of the charge. It was observed in the video recordings that the detonation product cloud exhibited pulsating behaviour due to the reflected shocks in the chamber analogous to the behaviour of the gas bubble in underwater explosions. This behaviour...

  20. Internet Video Telephony Allows Speech Reading by Deaf Individuals and Improves Speech Perception by Cochlear Implant Users

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal

    2013-01-01

    Objective To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119

  1. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  2. Video elicitation interviews: a qualitative research method for investigating physician-patient interactions.

    Science.gov (United States)

    Henry, Stephen G; Fetters, Michael D

    2012-01-01

    We describe the concept and method of video elicitation interviews and provide practical guidance for primary care researchers who want to use this qualitative method to investigate physician-patient interactions. During video elicitation interviews, researchers interview patients or physicians about a recent clinical interaction using a video recording of that interaction as an elicitation tool. Video elicitation is useful because it allows researchers to integrate data about the content of physician-patient interactions gained from video recordings with data about participants' associated thoughts, beliefs, and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. Video elicitation interviews are logistically demanding and time consuming, and they should be reserved for research questions that cannot be fully addressed using either standard interviews or video recordings in isolation. As many components of primary care fall into this category, high-quality video elicitation interviews can be an important method for understanding and improving physician-patient interactions in primary care.

  3. Video Elicitation Interviews: A Qualitative Research Method for Investigating Physician-Patient Interactions

    Science.gov (United States)

    Henry, Stephen G.; Fetters, Michael D.

    2012-01-01

    We describe the concept and method of video elicitation interviews and provide practical guidance for primary care researchers who want to use this qualitative method to investigate physician-patient interactions. During video elicitation interviews, researchers interview patients or physicians about a recent clinical interaction using a video recording of that interaction as an elicitation tool. Video elicitation is useful because it allows researchers to integrate data about the content of physician-patient interactions gained from video recordings with data about participants’ associated thoughts, beliefs, and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. Video elicitation interviews are logistically demanding and time consuming, and they should be reserved for research questions that cannot be fully addressed using either standard interviews or video recordings in isolation. As many components of primary care fall into this category, high-quality video elicitation interviews can be an important method for understanding and improving physician-patient interactions in primary care. PMID:22412003

  4. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    Science.gov (United States)

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
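
    The gain-function method described above (compare internal to external S1 in the frequency domain, then predict the internal gallop sound from the external one by the inverse transform) can be sketched with synthetic signals. The exponential chest-wall transfer function and the white-noise "sounds" below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024

# Hypothetical chest-wall transfer function (assumed, for illustration only).
transfer = np.exp(-np.linspace(0.0, 4.0, n // 2 + 1))

# Internal S1 and its externally recorded (apical) counterpart.
internal_s1 = rng.standard_normal(n)
external_s1 = np.fft.irfft(np.fft.rfft(internal_s1) * transfer, n)

# Gain function relating external to internal sound, from the S1 pair.
gain = np.fft.rfft(internal_s1) / (np.fft.rfft(external_s1) + 1e-12)

# Predict the internal gallop sound from the recorded external gallop.
internal_gallop = rng.standard_normal(n)          # "ground truth" for checking
external_gallop = np.fft.irfft(np.fft.rfft(internal_gallop) * transfer, n)
predicted_internal = np.fft.irfft(gain * np.fft.rfft(external_gallop), n)
```

    In this idealized setting the prediction recovers the internal gallop exactly; the study's point is that real internal recordings showed no such sounds at the predicted amplitude.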

  5. Analyzing the Pattern of L1 Sounds on L2 Sounds Produced by Javanese Students of Stkip PGRI Jombang

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  6. ANALYZING THE PATTERN OF L1 SOUNDS ON L2 SOUNDS PRODUCED BY JAVANESE STUDENTS OF STKIP PGRI JOMBANG

    Directory of Open Access Journals (Sweden)

    Daning Hentasmaka

    2015-07-01

    Full Text Available The study concerns an analysis of the tendency of first language (L1) sound patterning on second language (L2) sounds by Javanese students. Focusing on the consonant sounds, the data were collected by recording students' pronunciation of English words during a pronunciation test. The data were then analysed through three activities: data reduction, data display, and conclusion drawing/verification. The result showed that the patterning of L1 sounds happened on L2 sounds, especially on eleven consonant sounds: the fricatives [v, θ, ð, ʃ, ʒ], the voiceless stops [p, t, k], and the voiced stops [b, d, g]. These patterning cases emerged mostly due to differences in the existence of consonant sounds and in the rules of consonant distribution. Besides, one of the cases was caused by the difference in consonant clusters between L1 and L2.

  7. First year midwifery students' experience with self-recorded and assessed video of selected midwifery practice skills at Otago Polytechnic in New Zealand.

    Science.gov (United States)

    McIntosh, Carolyn; Patterson, Jean; Miller, Suzanne

    2018-01-01

    Studying undergraduate midwifery at a distance has advantages in terms of accessibility and community support but presents challenges for practice-based competence assessment. Student-recorded videos provide opportunities for completing the assigned skills, self-reflection, and assessment by a lecturer. This research asked how midwifery students experienced the process of completing the Video Assessment of Midwifery Practice Skills (VAMPS) in 2014 and 2015. The aim of the survey was to identify the benefits and challenges of the VAMPS assessment and to identify opportunities for improvement from the students' perspective. All students who had participated in the VAMPS assessment during 2014 and 2015 were invited to complete an online survey. To maintain confidentiality for the students, the Qualtrics survey was administered and the data downloaded by the Organisational Research Officer. Ethical approval was granted by the organisational ethics committee. Descriptive statistics were generated and students' comments were collated. The VAMPS provided an accessible option for the competence assessment and the opportunity for self-reflection and re-recording to perfect their skill, which the students appreciated. The main challenges related to the technical aspects of recording and uploading the assessment. This study highlighted some of the benefits and challenges experienced by the midwifery students and showed that practice skills can be successfully assessed at a distance. The additional benefit of accessibility afforded by video assessment is a new and unique finding for undergraduate midwifery education and may resonate with other educators seeking ways to assess similar skill sets with cohorts of students studying at a distance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Feasibility of an electronic stethoscope system for monitoring neonatal bowel sounds.

    Science.gov (United States)

    Dumas, Jasmine; Hill, Krista M; Adrezin, Ronald S; Alba, Jorge; Curry, Raquel; Campagna, Eric; Fernandes, Cecilia; Lamba, Vineet; Eisenfeld, Leonard

    2013-09-01

    Bowel dysfunction remains a major problem in neonates. Traditional auscultation of bowel sounds as a diagnostic aid in neonatal gastrointestinal complications is limited by skill and inability to document and reassess. Consequently, we built a unique prototype to investigate the feasibility of an electronic monitoring system for continuous assessment of bowel sounds. We attained approval by the Institutional Review Boards for the investigational study to test our system. The system incorporated a prototype stethoscope head with a built-in microphone connected to a digital recorder. Recordings made over extended periods were evaluated for quality. We also considered the acoustic environment of the hospital, where the stethoscope was used. The stethoscope head was attached to the abdomen with a hydrogel patch designed especially for this purpose. We used the system to obtain recordings from eight healthy, full-term babies. A scoring system was used to determine loudness, clarity, and ease of recognition comparing it to the traditional stethoscope. The recording duration was initially two hours and was increased to a maximum of eight hours. Median duration of attachment was three hours (3.75, 2.68). Based on the scoring, the bowel sound recording was perceived to be as loud and clear in sound reproduction as a traditional stethoscope. We determined that room noise and other noises were significant forms of interference in the recordings, which at times prevented analysis. However, no sound quality drift was noted in the recordings and no patient discomfort was noted. Minimal erythema was observed over the fixation site which subsided within one hour. We demonstrated the long-term recording of infant bowel sounds. Our contributions included a prototype stethoscope head, which was affixed using a specially designed hydrogel adhesive patch. Such a recording can be reviewed and reassessed, which is new technology and an improvement over current practice. 

  9. Low complexity video encoding for UAV inspection

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Zhang, Ruo; Forchhammer, Søren

    2016-01-01

    In this work we present several methods for fast integer motion estimation of videos recorded aboard an Unmanned Aerial Vehicle (UAV). Different from related work, the field depth is not considered to be consistent. The novel methods designed for low complexity MV prediction in H.264/AVC ... for UAV infrared (IR) video are also provided.

  10. Make your own video with ActivePresenter

    CERN Document Server

    CERN. Geneva

    2016-01-01

    A step-by-step video tutorial on how to use ActivePresenter, a screen recording tool for Windows and Mac. The installation step is not needed for CERN users, as the product is already made available. This tutorial explains how to install ActivePresenter, how to do a screen recording and edit a video using ActivePresenter, and finally how to export the end product. Tell us what you think about this or any other video in this category via e-learning.support at cern.ch. All info about the CERN rapid e-learning project is linked from http://twiki.cern.ch/ELearning

  11. Reliable assessment of general surgeons' non-technical skills based on video-recordings of patient simulated scenarios.

    Science.gov (United States)

    Spanager, Lene; Beier-Holgersen, Randi; Dieckmann, Peter; Konge, Lars; Rosenberg, Jacob; Oestergaard, Doris

    2013-11-01

    Nontechnical skills are essential for safe and efficient surgery. The aim of this study was to evaluate the reliability of an assessment tool for surgeons' nontechnical skills, Non-Technical Skills for Surgeons dk (NOTSSdk), and the effect of rater training. A 1-day course was conducted for 15 general surgeons in which they rated surgeons' nontechnical skills in 9 video recordings of scenarios simulating real intraoperative situations. Data were gathered from 2 sessions separated by a 4-hour training session. Interrater reliability was high for both pretraining ratings (Cronbach's α = .97) and posttraining ratings (Cronbach's α = .98). There was no statistically significant improvement in assessment skills. The D study showed that 2 untrained raters or 1 trained rater was needed to obtain generalizability coefficients >.80. The high pretraining interrater reliability indicates that the videos were easy to rate and NOTSSdk easy to use. This implies that NOTSSdk could be an important tool in surgical training, potentially improving safety and quality for surgical patients. Copyright © 2013 Elsevier Inc. All rights reserved.
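
    The interrater reliability figures above are Cronbach's α values. For an items-by-raters score matrix the coefficient can be computed directly; the scores below are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (items x raters) score matrix.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals),
    where k is the number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    rater_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - rater_vars / total_var)

# Hypothetical scores: 6 video scenarios rated by 4 raters on a 1-4 scale.
scores = [
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 4, 4, 4],
    [1, 2, 1, 1],
    [3, 4, 3, 3],
    [2, 2, 3, 2],
]
alpha = cronbach_alpha(scores)   # close to 1 when raters largely agree
```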

  12. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  13. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
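
    The supervision strategy described above (generating synthetic motion blur from high-framerate video) amounts to averaging consecutive sharp frames to mimic a long exposure. A toy grayscale example, with a moving bright square standing in for real footage:

```python
import numpy as np

def synthesize_blur(frames):
    """Average consecutive high-framerate frames to mimic a long exposure."""
    return np.mean(np.stack(frames), axis=0)

# Hypothetical 8-frame burst: a bright square moving one pixel per frame.
frames = []
for i in range(8):
    img = np.zeros((32, 32))
    img[12:20, 4 + i:12 + i] = 1.0
    frames.append(img)

blurred = synthesize_blur(frames)   # sharp edges smear along the motion path
```

    Pairs of (blurred, central sharp frame) produced this way can then serve as training data for a deblurring network.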

  14. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
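
    The azimuthal estimation step mentioned above (beamforming with a hydrophone array) can be sketched as a frequency-domain delay-and-sum beamformer for a far-field plane wave. The array geometry, sample rate and test tone below are illustrative values, not those of the dissertation's recordings.

```python
import numpy as np

C = 1500.0        # sound speed in seawater (m/s)
SR = 8000.0       # sample rate (Hz)
SPACING = 0.5     # hydrophone spacing (m)
N_SENSORS = 8

def steer_delays(theta):
    """Per-sensor arrival delays (s) for a plane wave from azimuth theta (rad)."""
    return np.arange(N_SENSORS) * SPACING * np.sin(theta) / C

def beam_power(signals, theta):
    """Delay-and-sum: undo the assumed delays in the frequency domain and sum."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / SR)
    spectra = np.fft.rfft(signals, axis=1)
    aligned = spectra * np.exp(2j * np.pi * freqs * steer_delays(theta)[:, None])
    beam = np.fft.irfft(aligned.sum(axis=0), n)
    return np.mean(beam ** 2)

# Simulate a 250 Hz tone arriving at the array from 30 degrees.
true_theta = np.deg2rad(30.0)
t = np.arange(1024) / SR
signals = np.stack([np.sin(2.0 * np.pi * 250.0 * (t - d))
                    for d in steer_delays(true_theta)])

# Scan candidate azimuths and pick the one with maximum beam power.
grid = np.deg2rad(np.arange(-90, 91, 2))
estimate = grid[np.argmax([beam_power(signals, th) for th in grid])]
```

    Repeating the scan over a second array axis is what turns azimuth estimates into the two-dimensional source maps described in the third case.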

  15. Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter

    Science.gov (United States)

    Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.

    2017-04-01

    The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in a sound file. These breath sounds will be analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component which contains the information signal can be clarified. In this study, we designed a wavelet transform based filter. The filter designed in this study uses a Daubechies wavelet with four wavelet transform coefficients. Based on testing with the ten types of breath sound data, the largest SNR, 74.3685 dB, was obtained for bronchial sounds.
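
    Wavelet-based denoising of this kind follows a decompose / threshold-details / reconstruct pattern. The sketch below uses a one-level Haar transform to keep the code short (the paper uses a Daubechies wavelet with four coefficients); the threshold value and test signal are illustrative.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar DWT: low-pass (approximation) and high-pass (detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    """Exact inverse of haar_decompose."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Soft-threshold the detail coefficients, then invert the transform."""
    a, d = haar_decompose(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_reconstruct(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)                    # stand-in "breath sound"
noisy = clean + 0.2 * rng.standard_normal(len(t))
denoised = denoise(noisy, threshold=0.2)
```

    Libraries such as PyWavelets provide the multi-level Daubechies transforms a real implementation would use; the structure of the algorithm is the same.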

  16. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As applications of video cameras have increased greatly in recent years, face recognition makes a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record videos without fixed postures for the object, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing real life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
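
    The reconstruction idea (fit a subspace model, then fill occluded pixels from the visible ones) can be illustrated with classical PCA standing in for the paper's fuzzy PCA; the low-dimensional synthetic "faces" below are stand-ins for vectorized, aligned face images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faces": 200 samples of a 64-dim signal on a 5-dim subspace.
basis = rng.standard_normal((5, 64))
train = rng.standard_normal((200, 5)) @ basis
mean = train.mean(axis=0)

# Classical PCA via SVD of the centered training data.
_, _, components = np.linalg.svd(train - mean, full_matrices=False)
components = components[:5]                      # top 5 "eigenfaces"

def restore(face, mask):
    """Fit subspace coefficients on visible pixels (mask True), then fill
    the occluded pixels from the subspace reconstruction."""
    a = components[:, mask].T                    # visible-pixel basis
    coeffs, *_ = np.linalg.lstsq(a, face[mask] - mean[mask], rcond=None)
    filled = mean + coeffs @ components
    out = face.copy()
    out[~mask] = filled[~mask]
    return out

face = rng.standard_normal(5) @ basis            # unseen test face
occluded = face.copy()
occluded[20:30] = 0.0                            # simulated occlusion
mask = np.ones(64, dtype=bool)
mask[20:30] = False
restored = restore(occluded, mask)
```

    Because the synthetic faces lie exactly in the learned subspace, the fill-in here is essentially exact; on real images the reconstruction is only approximate, which is where the fuzzy weighting of FPCA helps.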

  17. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  18. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

    The growth of technology based on the internet protocol has affected the informatics and automatic controls of medical fields. The aim of this study was to establish a telemedical educational system by developing high-quality image transfer using DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only domestically but also internationally. Moreover, we could discuss the conditions of surgical procedures between the operating room and the seminar room. The Korea-Japan cable network (KJCN) was laid undersea between Busan and Fukuoka, while the Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link images between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we ran a teleconferencing system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We could maintain sufficient bandwidth of 60 Mbps for two-line transmission. The transmitted moving images showed no frame loss at a rate of 30 frames per second. The sound was also clear, and the time delay was less than 0.3 s. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over the internet protocol. It is easy to perform, reliable, and economical. Thus, it will be a promising tool for worldwide telemedical communication in the future.

  19. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head and face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements, in order to improve the sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Different facial/cranial movements during EEG-fMRI were categorized for 38 patients [benign epilepsy with centro-temporal spikes (BECTS, n=16); idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared, at the single-subject and group levels, the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As a secondary aim, we treated facial movements as events of interest to test the usefulness of the video information for obtaining fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to EEG observation alone (a mean gain of 28 events per exam). Including physiological activities as additional regressors in the GLM demonstrated an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to those obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies of epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
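    Why adding movement regressors raises the IED statistics can be sketched on toy data: modeling a nuisance signal shrinks the residual variance, which increases the t/Z score of the regressor of interest. The regressors, effect sizes, and noise level below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                        # fMRI volumes
ied = (rng.random(n) < 0.1).astype(float)      # hypothetical IED stick regressor
motion = rng.standard_normal(n)                # hypothetical video-derived movement regressor
y = 2.0 * ied + 1.5 * motion + rng.standard_normal(n)   # toy voxel time course

def glm_t(y, X):
    """OLS betas plus the t-statistic of the first regressor."""
    X = np.column_stack([X, np.ones(len(y))])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = np.sum((y - X @ beta) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], beta[0] / np.sqrt(cov[0, 0])

b0, t_without = glm_t(y, ied[:, None])                      # IED only
b1, t_with = glm_t(y, np.column_stack([ied, motion]))       # IED + movement regressor
```

With the movement term in the design matrix, its variance no longer inflates the residual, so the IED t-statistic goes up, which is the mechanism behind the improved maps reported above.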

  20. The NASA Sounding Rocket Program and space sciences

    Science.gov (United States)

    Gurkin, L. W.

    1992-01-01

    High altitude suborbital rockets (sounding rockets) have been extensively used for space science research in the post-World War II period; the NASA Sounding Rocket Program has been on-going since the inception of the Agency and supports all space science disciplines. In recent years, sounding rockets have been utilized to provide a low gravity environment for materials processing research, particularly in the commercial sector. Sounding rockets offer unique features as a low gravity flight platform. Quick response and low cost combine to provide more frequent spaceflight opportunities. Suborbital spacecraft design practice has achieved a high level of sophistication which optimizes the limited available flight times. High data-rate telemetry, real-time ground up-link command and down-link video data are routinely used in sounding rocket payloads. Standard, off-the-shelf, active control systems are available which limit payload body rates such that the gravitational environment remains less than 10(-4) g during the control period. Operational launch vehicles are available which can provide up to 7 minutes of experiment time for experiment weights up to 270 kg. Standard payload recovery systems allow soft impact retrieval of payloads. When launched from White Sands Missile Range, New Mexico, payloads can be retrieved and returned to the launch site within hours.

  1. Hardly that kind of girl? : on female representations in mainstream pop music videos

    OpenAIRE

    Hansen, Kai Arne

    2011-01-01

    Music video is a particularly powerful medium for showcasing pop artists, offering up a site where images and sounds come together to shape alluring representations. This thesis explores a selection of mainstream pop videos from a poststructuralist perspective, linking the representations of selected female artists to notions of gendered identity, sexuality, and ethnicity. As technological advancements open up new representational opportunities, current trends seem to showcase the female pop ...

  2. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: modulation was shorter lived, and the effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting that modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  3. Reliability of Alberta Infant Motor Scale Using Recorded Video Observations Among the Preterm Infants in India: A Reliability Study

    Directory of Open Access Journals (Sweden)

    Veena Kirthika S

    2017-10-01

    Full Text Available Background: Assessment of motor function is a vital characteristic of infant development. The Alberta Infant Motor Scale (AIMS) is considered one of the tools available for screening developmental delays, but the scale was formulated using Western samples. Every country has its own ethnic and cultural background, and various differences are observed in culture and ethnicity. Therefore, there is a need to establish the reliability of the AIMS for use in the South Indian population. Purpose: To find the intra-rater and inter-rater reliability of the Alberta Infant Motor Scale (AIMS) on preterm infants using recorded video observations in the Indian population. Method: 30 preterm infants in three age groups, 0-3 months (10 infants), 4-7 months (10 infants), and 8-18 months (10 infants), were recruited for this reliability study. The AIMS was administered to the preterm infants and the performance was videotaped. The performance was then rescored by the same therapist, immediately from the video and in another two consecutive months, to estimate intra-rater reliability using ICC(3,1), a two-way mixed effects model. For inter-rater reliability, the AIMS was scored by three different raters (the same therapist and two others), using ICC(2,k), a two-way random effects model. Results: Intra-rater reliability of the AIMS was ICC(3,1) = 0.99 and inter-rater reliability was ICC(2,k) = 0.96. Conclusion: The AIMS has excellent intra- and inter-rater reliability using recorded video observations among preterm infants in India.
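    The two ICC forms named above can be computed directly from the two-way ANOVA mean squares (the Shrout and Fleiss formulation). The synthetic ratings in the test are illustrative, not the study's data:

```python
import numpy as np

def _mean_squares(Y):
    """Two-way ANOVA mean squares for an n-subjects x k-raters matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)    # between-subject
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)    # between-rater
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols  # residual
    return (ss_rows / (n - 1), ss_cols / (k - 1),
            ss_err / ((n - 1) * (k - 1)))

def icc_3_1(Y):
    """ICC(3,1): two-way mixed effects, single rater, consistency."""
    n, k = Y.shape
    ms_r, _, ms_e = _mean_squares(Y)
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)

def icc_2_k(Y):
    """ICC(2,k): two-way random effects, average of k raters, absolute agreement."""
    n, k = Y.shape
    ms_r, ms_c, ms_e = _mean_squares(Y)
    return (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)
```

Perfectly consistent raters give ICC(3,1) = 1; adding rater noise pulls both coefficients below 1 in proportion to the noise variance.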

  4. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  5. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  6. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals' frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations), as well as the directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves.

  7. A Green Soundscape Index (GSI): The potential of assessing the perceived balance between natural sound and traffic noise.

    Science.gov (United States)

    Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno

    2018-06-13

    Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology in eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived and their extents are among the collected variables that belong to the EE, i.e. traffic, people, natural sounds, and others. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extents of natural sounds to traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were applied, which showed significant differences, especially between ranges 1 and 2 versus range 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator to classify urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources. Copyright © 2018 Elsevier B.V. All rights reserved.
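    A minimal sketch of the index: the GSI is the ratio of the perceived extent of natural sounds to that of traffic noise, mapped to the three ranges above. The numeric cut-offs below are hypothetical placeholders, since the abstract does not state them:

```python
def gsi(natural_extent, traffic_extent):
    """Green Soundscape Index: perceived extent of natural sounds relative
    to traffic noise (both, e.g., on a 1-5 survey scale)."""
    return natural_extent / traffic_extent

def classify(index, low=0.8, high=1.25):
    """Map a GSI value to the three perceptual ranges; these cut-offs are
    illustrative assumptions, not values from the paper."""
    if index < low:
        return "traffic noise predominates"
    if index > high:
        return "natural sounds predominate"
    return "balanced perception"
```

For example, a location rated 4 for natural sounds and 2 for traffic yields GSI = 2.0, landing in the natural-sound-dominated range under these placeholder thresholds.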

  8. Abnormal eating behavior in video-recorded meals in anorexia nervosa.

    Science.gov (United States)

    Gianini, Loren; Liu, Ying; Wang, Yuanjia; Attia, Evelyn; Walsh, B Timothy; Steinglass, Joanna

    2015-12-01

    Eating behavior during meals in anorexia nervosa (AN) has long been noted to be abnormal, but little research has been done carefully characterizing these behaviors. These eating behaviors have been considered pathological, but are not well understood. The current study sought to quantify ingestive and non-ingestive behaviors during a laboratory lunch meal, compare them to the behaviors of healthy controls (HC), and examine their relationships with caloric intake and anxiety during the meal. A standardized lunch meal was video-recorded for 26 individuals with AN and 10 HC. Duration, frequency, and latency of 16 mealtime behaviors were coded using computer software. Caloric intake, dietary energy density (DEDS), and anxiety were also measured. Nine mealtime behaviors were identified that distinguished AN from HC: staring at food, tearing food, nibbling/picking, dissecting food, napkin use, inappropriate utensil use, hand fidgeting, eating latency, and nibbling/picking latency. Among AN, a subset of these behaviors was related to caloric intake and anxiety. These data demonstrate that the mealtime behaviors of patients with AN and HC differ significantly, and some of these behaviors may be associated with food intake and anxiety. These mealtime behaviors may be important treatment targets to improve eating behavior in individuals with AN. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. THE SOUND OF CINEMA: TECHNOLOGY AND CREATIVITY

    Directory of Open Access Journals (Sweden)

    Poznin Vitaly F.

    2017-12-01

    Full Text Available Technology is a means of creating any product. In screen art, however, it is also one of the elements that create the artistic space of a film. Tracing the main stages in the development of cinematography, this article explores the influence of sound recording technology on the creation of the special artistic and physical space of film: the introduction of sound in movies; the mastering of the artistic means of the audiovisual work; the expansion of the spatial characteristics of screen sound; and sound in modern cinema. Today, thanks to new technologies, sound in cinema forms a specific quasi-realistic landscape, greatly enhancing the impact of virtual screen images on the viewer.

  10. 47 CFR 76.1710 - Operator interests in video programming.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Operator interests in video programming. 76....1710 Operator interests in video programming. (a) Cable operators are required to maintain records in... interests in all video programming services as well as information regarding their carriage of such...

  11. Detecting the temporal structure of sound sequences in newborn infants

    NARCIS (Netherlands)

    Háden, G.P.; Honing, H.; Török, M.; Winkler, I.

    2015-01-01

    Most high-level auditory functions require one to detect the onset and offset of sound sequences as well as registering the rate at which sounds are presented within the sound trains. By recording event-related brain potentials to onsets and offsets of tone trains as well as to changes in the

  12. Video-Stimulated Accounts: Young Children Accounting for Interactional Matters in Front of Peers

    Science.gov (United States)

    Theobald, Maryanne

    2012-01-01

    Research in the early years places increasing importance on participatory methods to engage children. The playback of video-recording to stimulate conversation is a research method that enables children's accounts to be heard and attends to a participatory view. During video-stimulated sessions, participants watch an extract of video-recording of…

  13. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects the neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
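    The two cues the model is described as separating, amplitude and interaural phase difference (IPD), can be extracted from a narrow-band binaural pair via the analytic signal. This is an illustrative fixed decomposition, not the paper's learned complex-valued basis:

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_and_ipd(left, right):
    """Split each ear's narrow-band signal into amplitude and phase using
    the analytic (Hilbert) signal, and form the per-sample interaural
    phase difference -- the two cues the model's layers are said to encode."""
    al, ar = hilbert(left), hilbert(right)
    amp_l, amp_r = np.abs(al), np.abs(ar)
    ipd = np.angle(al * np.conj(ar))    # wrapped phase difference, radians
    return amp_l, amp_r, ipd
```

For a pure tone delayed at one ear by tau seconds, the IPD settles at 2*pi*f0*tau away from the signal edges, which is the cue a spatially tuned unit could exploit.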

  14. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
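    The first stage, detecting S1/S2 from the amplitude envelope in the 15-90 Hz band, might look like the following sketch. The filter order, peak-height threshold, and minimum spacing are assumptions for illustration, not the authors' parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def sound_envelope(x, fs, band=(15.0, 90.0)):
    """Band-limit the phonocardiogram and take its amplitude envelope,
    mirroring the 15-90 Hz band the paper uses for S1/S2 detection."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def detect_heart_sounds(x, fs, min_gap_s=0.2):
    """Return candidate S1/S2 event times (s) as envelope peaks separated
    by at least min_gap_s; threshold choices here are illustrative."""
    env = sound_envelope(x, fs)
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_gap_s * fs))
    return peaks / fs
```

The per-band averaged S1/S2 shapes described in the abstract would then be built by slicing the envelope around each detected event time.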

  15. 76 FR 59963 - Closed Captioning of Internet Protocol-Delivered Video Programming: Implementation of the Twenty...

    Science.gov (United States)

    2011-09-28

    ... words, captions may identify speakers, sound effects, music, and laughter.\\23\\ \\19\\ See Closed... captioning increased somewhat, through the voluntary efforts of the video programming industry.\\26\\ As the...

  16. Filtering the Unknown: Speech Activity Detection in Heterogeneous Video Collections

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; Wooters, Chuck; Ordelman, Roeland J.F.

    2007-01-01

    In this paper we discuss the speech activity detection system that we used for detecting speech regions in the Dutch TRECVID video collection. The system is designed to filter non-speech like music or sound effects out of the signal without the use of predefined non-speech models. Because the system
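    As a point of comparison, a naive energy-threshold detector can be sketched in a few lines. This is a common baseline, not the system described in the record, which is specifically designed to work without predefined non-speech models:

```python
import numpy as np

def frame_energy_vad(x, fs, frame_ms=30, k=3.0):
    """Label each frame as speech-like (True) when its log energy exceeds
    a robust threshold (median + k * MAD over all frames). The frame size
    and k are illustrative choices."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)
    e = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
    med = np.median(e)
    mad = np.median(np.abs(e - med))
    return e > med + k * mad
```

Such a detector fails exactly where the record's system aims to succeed: loud music or sound effects also exceed an energy threshold, which is why model-free speech/non-speech discrimination is the harder problem.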

  17. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
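    One standard building block for the sound-localization half of such a system is estimating the time difference of arrival (TDOA) between two microphones with GCC-PHAT. The abstract does not say which method the authors used, so this is a generic sketch:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time difference of arrival of `sig` relative to `ref` via the
    generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                    # phase transform weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds
```

Given the estimated delay and the microphone spacing, the bearing to the source follows from simple geometry, and a face detector can then confirm the speaker in the corresponding camera region, which is the fusion idea in the record.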

  18. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

  19. Acoustic analysis of swallowing sounds: a new technique for assessing dysphagia.

    Science.gov (United States)

    Santamato, Andrea; Panza, Francesco; Solfrizzi, Vincenzo; Russo, Anna; Frisardi, Vincenza; Megna, Marisa; Ranieri, Maurizio; Fiore, Pietro

    2009-07-01

    To perform acoustic analysis of swallowing sounds, using a microphone and a notebook computer system, in healthy subjects and patients with dysphagia affected by neurological diseases, testing the positive/negative predictive value of a pathological pattern of swallowing sounds for penetration/aspiration. Diagnostic test study, prospective, not blinded, with penetration/aspiration evaluated by fibreoptic endoscopy of swallowing as the criterion standard. Data from a previously recorded database of normal swallowing sounds for 60 healthy subjects according to gender, age, and bolus consistency were compared with those of 15 patients with dysphagia from a university hospital referral centre who were affected by various neurological diseases. The mean duration of the swallowing sounds and post-swallowing apnoea were recorded. Penetration/aspiration was verified by fibreoptic endoscopy of swallowing in all patients with dysphagia. The mean duration of swallowing sounds for a liquid bolus of 10 ml water was significantly different between patients with dysphagia and healthy subjects. We also described patterns of swallowing sounds and tested the negative/positive predictive values of post-swallowing apnoea for penetration/aspiration verified by fibreoptic endoscopy of swallowing (sensitivity 0.67 (95% confidence interval 0.24-0.94); specificity 1.00 (95% confidence interval 0.56-1.00)). The proposed technique for recording and measuring swallowing sounds could be incorporated into the bedside evaluation, but it should not replace the use of more diagnostic and valuable measures.

  20. Characterizing popularity dynamics of online videos

    Science.gov (United States)

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-07-01

    Online popularity has a major impact on videos, music, news, and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the popularity already acquired by each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-providing websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We find that the popularity dynamics of online videos evolve over time and can be characterized by burst behaviors, typically occurring in the early life span of a video, which later give way to the classic preferential popularity-increase mechanism.
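    The classic preferential popularity-increase mechanism mentioned above can be simulated in a few lines as a Pólya-urn-style toy model; the early-life burst behavior is deliberately not modeled here:

```python
import numpy as np

def simulate_views(n_videos=50, steps=5000, seed=0):
    """Rich-get-richer toy model: each new view goes to video i with
    probability proportional to its current view count."""
    rng = np.random.default_rng(seed)
    views = np.ones(n_videos)                 # every video starts with one view
    for _ in range(steps):
        p = views / views.sum()
        views[rng.choice(n_videos, p=p)] += 1
    return views

views = simulate_views()
```

Even from a perfectly uniform start, the positive feedback concentrates views on a few items, producing the heavy-tailed popularity distributions that such empirical studies report.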

  1. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  2. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  3. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combinations of fingerprints, iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcement agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate the retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We assess the performance of our system using a benchmark audio-visual face biometric database that has multiple videos for each subject but with different identity claims. We demonstrate that retrieval of a relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database yields a significant proportion of that individual's biometric data.

  4. Video capture on student-owned mobile devices to facilitate psychomotor skills acquisition: A feasibility study.

    Science.gov (United States)

    Hinck, Glori; Bergmann, Thomas F

    2013-01-01

    Objective: We evaluated the feasibility of using mobile device technology to allow students to record their own psychomotor skills so that these recordings can be used for self-reflection and formative evaluation. Methods: Students were given the choice of using DVD recorders, zip drive video capture equipment, or their personal mobile phone, device, or digital camera to record specific psychomotor skills. During the last week of the term, they were asked to complete a 9-question survey regarding their recording experience, including details of mobile phone ownership, technology preferences, technical difficulties, and satisfaction with the recording experience and video critique process. Results: Of those completing the survey, 83% currently owned a mobile phone with video capability. Of the mobile phone owners, 62% reported having email capability on their phone and that they could transfer their video recording successfully to their computer, making it available for upload to the learning management system. Viewing the video recording of the psychomotor skill was valuable to 88% of respondents. Conclusions: Our results suggest that mobile phones are a viable technology to use for the video capture and critique of psychomotor skills, as most students own this technology and their satisfaction with this method is high.

  5. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
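The crosstalk-correction step described above can be sketched as a linear unmixing problem: assuming the colour leakage is well modelled by a fixed 3×3 mixing matrix estimated during calibration (the coefficients below are invented for illustration, not the paper's values), the three flash-lit exposures are recovered by inverting that matrix.

```python
import numpy as np

# Hypothetical calibration: each flash leaks a little into the other two
# colour channels. Rows = recorded R,G,B; columns = red/green/blue flash.
M = np.array([[1.00, 0.08, 0.03],
              [0.06, 1.00, 0.07],
              [0.02, 0.05, 1.00]])
M_inv = np.linalg.inv(M)  # computed once per equipment configuration

def separate_frames(rgb_pixels):
    """Recover the three flash-lit exposures from recorded RGB values.

    rgb_pixels: array of shape (..., 3) of recorded channel values.
    Returns an array of the same shape with the crosstalk removed."""
    return rgb_pixels @ M_inv.T

# One pixel as it would be recorded when only the red flash fired at
# intensity 200: leakage puts some signal into G and B as well.
recorded = np.array([200 * 1.00, 200 * 0.06, 200 * 0.02])
print(np.round(separate_frames(recorded)))  # ~ [200, 0, 0]
```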

  6. Fractal measures of video-recorded trajectories can classify motor subtypes in Parkinson's Disease

    Science.gov (United States)

    Figueiredo, Thiago C.; Vivas, Jamile; Peña, Norberto; Miranda, José G. V.

    2016-11-01

    Parkinson's Disease is one of the most prevalent neurodegenerative diseases in the world and affects millions of individuals worldwide. The clinical criteria for classification of motor subtypes in Parkinson's Disease are subjective and may be misleading when symptoms are not clearly identifiable. A video recording protocol was used to measure hand tremor of 14 individuals with Parkinson's Disease and 7 healthy subjects. A method for motor subtype classification was proposed based on the spectral distribution of the movement and compared with the existing clinical criteria. Box-counting dimension and Hurst Exponent calculated from the trajectories were used as the relevant measures for the statistical tests. The classification based on the power-spectrum is shown to be well suited to separate patients with and without tremor from healthy subjects and could provide clinicians with a tool to aid in the diagnosis of patients in an early stage of the disease.
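The box-counting dimension used above as a trajectory measure can be estimated along these lines; the grid sizes, the unit-square normalisation, and the log-log fit are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def box_counting_dimension(points, sizes=(1/2, 1/4, 1/8, 1/16, 1/32)):
    """Estimate the box-counting dimension of a 2-D trajectory.

    points: (n, 2) array, assumed normalised to the unit square.
    Counts occupied grid boxes at several box sizes and fits the slope
    of log(count) against log(1/size)."""
    counts = []
    for s in sizes:
        boxes = {tuple(b) for b in np.floor(points / s).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight diagonal line has dimension close to 1.
t = np.linspace(0, 1, 20000, endpoint=False)
line = np.column_stack([t, t])
print(round(box_counting_dimension(line), 2))  # → 1.0
```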

  7. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    DEFF Research Database (Denmark)

    Nordahl, Rolf

    2010-01-01

    We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users’ actions, while soundscapes reproduce the characteristic soundmarks...... as well as self-induced interactive sounds simulated using physical models. Results show that subjects’ motion in the environment is significantly enhanced when dynamic sound sources and sound of egomotion are rendered in the environment....

  8. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...... response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were...... recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  9. Sound Synthesis of Objects Swinging through Air Using Physical Models

    Directory of Open Access Journals (Sweden)

    Rod Selfridge

    2017-11-01

    Full Text Available A real-time physically-derived sound synthesis model is presented that replicates the sounds generated as an object swings through the air. Equations obtained from fluid dynamics are used to determine the sounds generated while exposing practical parameters for a user or game engine to vary. Listening tests reveal that for the majority of objects modelled, participants rated the sounds from our model as plausible as actual recordings. The sword sound effect performed worse than others, and it is speculated that one cause may be linked to the difference between expectations of a sound and the actual sound for a given object.

  10. Vocal Noise Cancellation From Respiratory Sounds

    National Research Council Canada - National Science Library

    Moussavi, Zahra

    2001-01-01

    Although background noise cancellation for speech or electrocardiographic recordings is well established, when the background noise contains vocal noises and the main signal is a breath sound...

  11. A comparison between flexible electrogoniometers, inclinometers and three-dimensional video analysis system for recording neck movement.

    Science.gov (United States)

    Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil

    2013-11-01

    This study compared neck range of movement recording using three different methods: flexible electrogoniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG) in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC, and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for the lateral flexion and rotation axes. In lateral flexion movement, all systems showed similar amplitude and the inter-system differences were moderate (4-7%). For rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%) except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. The Use of Videos in Teaching - Some Experiences From the University of Copenhagen

    Directory of Open Access Journals (Sweden)

    Henrik Bregnhøj

    2016-11-01

    Full Text Available This paper covers videos created and used in different learning patterns. The videos are grouped according to the teaching or learning activities in which they are used. One group of videos is used by the teacher for one-way communication, including: online lectures, experts interacting with one another, instruction videos and introduction videos. Further videos are teacher-student interactive videos, including: feedback on student deliveries, student productions and interactive videos. Examples of different types of videos (screencasts, pencasts and different kinds of camera recordings, from quick-and-dirty videos made by teachers at their own computer to professionally produced studio recordings, as well as audio files, drawn from different courses at different faculties at The University of Copenhagen, are presented with links as an empirical basis for the discussion. The paper is very practically oriented and looks at, e.g., which course design and teaching situation is suitable for which type of video; at which point an audio file is preferable to a video file; and how to produce videos easily and without specialized equipment, if you don’t have access to (or time for) professional assistance. In the article, we also point out how a few tips & tricks regarding planning, design and presentation technique can improve recordings made by teachers themselves. We argue that the way to work with audio and video is to start by analyzing the pedagogical needs, in this way adapting the type and use of audio and video to the pedagogical context.

  13. On-Board Video Recording Unravels Bird Behavior and Mortality Produced by High-Speed Trains

    Directory of Open Access Journals (Sweden)

    Eladio L. García de la Morena

    2017-10-01

    Full Text Available Large high-speed railway (HSR) networks are planned for the near future to accommodate increased transport demand with low energy consumption. However, high-speed trains produce unknown avian mortality, since birds use the railway and are unable to avoid approaching trains. Safety and logistic difficulties have until now precluded mortality estimation in railways through carcass removal, but information technologies can overcome such problems. We present the results obtained with an experimental on-board system to record bird-train collisions, composed of a frontal recording camera, a GPS navigation system and a data storage unit. An observer standing in the cabin behind the driver controlled the system and filled out a form with data on collisions and bird observations in front of the train. Photographs of the train front taken before and after each journey were used to improve the record of killed birds. Trains running the 321.7 km line between Madrid and Albacete (Spain) at speeds up to 250–300 km/h were equipped with the system during 66 journeys over the course of a year, totaling approximately 14,700 km of effective recording. The review of videos produced 1,090 bird observations, 29.4% of them corresponding to birds crossing the infrastructure under the catenary and thus facing collision risk. Recordings also showed that 37.7% of bird crossings were of animals resting on some element of the infrastructure moments before the train's arrival, and that the flight initiation distance of birds (mean ± SD) was between 60 ± 33 m (passerines) and 136 ± 49 m (raptors). Mortality in the railway was estimated to be 60.5 birds/km per year on a line section with 53 runs per day and 26.1 birds/km per year on a section with 25 runs per day. Our results are the first published estimation of bird mortality in an HSR and show the potential of information technologies to yield useful data for monitoring the impact of trains on birds via on-board recording systems. Moreover

  14. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  15. A simplified 2D to 3D video conversion technology——taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    Full Text Available This paper describes a simplified 2D to 3D video conversion technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of 2D to 3D video conversion technology and points out the disadvantages of traditional methods. Second, it presents an innovative and convenient method. A flow diagram and the software and hardware configurations are presented. Finally, a detailed description of the conversion steps and precautions is given for each of the three processes, namely, preparing materials, modeling objects and baking landscapes, and recording the screen and converting videos.

  16. Frequency shifting approach towards textual transcription of heartbeat sounds.

    Science.gov (United States)

    Arvin, Farshad; Doraisamy, Shyamala; Safar Khorasani, Ehsan

    2011-10-04

    Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format which can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow applying indexing and searching techniques to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long durations of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples which were recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.
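One simple way to realise a frequency shift of the kind described above, not necessarily the authors' method, is to roll the FFT bins of the signal upward and resynthesise; the sampling rate, shift amount, and test tone below are invented for illustration.

```python
import numpy as np

def frequency_shift(signal, shift_hz, fs):
    """Shift every spectral component up by shift_hz via an FFT bin roll.

    A crude single-sideband shift: move the positive-frequency bins up,
    zero the vacated low bins, and resynthesise with the inverse FFT."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    k = int(round(shift_hz * n / fs))          # shift expressed in bins
    shifted = np.zeros_like(spectrum)
    if k < len(spectrum):
        shifted[k:] = spectrum[:len(spectrum) - k]
    return np.fft.irfft(shifted, n)

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 50 * t)              # 50 Hz stand-in for a heart sound
shifted = frequency_shift(tone, 400, fs)       # energy moves to 450 Hz
peak = int(np.argmax(np.abs(np.fft.rfft(shifted))))
print(peak)  # dominant bin → 450 (1 Hz per bin here)
```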

  17. Frequency shifting approach towards textual transcription of heartbeat sounds

    Directory of Open Access Journals (Sweden)

    Safar Khorasani Ehsan

    2011-10-01

    Full Text Available Abstract Auscultation is an approach for diagnosing many cardiovascular problems. Automatic analysis of heartbeat sounds and extraction of their audio features can assist physicians in diagnosing diseases. Textual transcription allows recording a continuous heart sound stream in a text format which can be stored in very little memory in comparison with other audio formats. In addition, text-based data allow applying indexing and searching techniques to access critical events. Hence, the transcribed heartbeat sounds provide useful information for monitoring the behavior of a patient over long durations of time. This paper proposes a frequency shifting method in order to improve the performance of the transcription. The main objective of this study is to transfer the heartbeat sounds to the music domain. The proposed technique is tested with 100 samples which were recorded from different heart disease categories. The observed results show that the proposed shifting method significantly improves the performance of the transcription.

  18. V-Cinema : canons of Japanese film and the challenge of video

    NARCIS (Netherlands)

    Mes, T.P.

    2018-01-01

    The cinema of Japan has long played a central role in the study of film. But its well-established canon of master directors and classic films obscures as much as it reveals. A case in point is video, arguably the most transformative technological change in cinema since the introduction of sound

  19. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing video medical records. Two crucial issues to be decided on are the video compression format and the video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats compared are DICOM as a format representative of medical applications, both MPEGs, and several new formats targeted at IP networking. The comparison covers supported transmission rates, compression rates, and options for controlling the compression process. The second part of the paper presents the ISDN technique as a solution for provisioning tele-consultation services between medical parties that access resources uploaded to a digital video library. There are several backbone techniques available (such as corporate LANs/WANs, leased lines or even radio/satellite links); however, the availability of network resources for hospitals was the prevailing choice criterion, pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes the possibilities of both wireless and cellular networks' data transmission services being used as a medical video server transport layer. For the cellular-network-based solution two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  20. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

    Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, are not developed using the target population, or are not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol, and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol, and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%; Cohen’s kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa all >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data synchronised with high frame-rate (≥25 fps) video-labelled data recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.
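Inter-rater agreement figures such as the Cohen's kappa quoted above can be computed from paired label sequences; the sketch below uses hypothetical activity labels for two raters, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical frame labels from two raters using activity categories.
a = ["walk", "walk", "sit", "stand", "walk", "sit", "sit", "stand"]
b = ["walk", "walk", "sit", "stand", "sit",  "sit", "sit", "walk"]
print(round(cohens_kappa(a, b), 2))  # → 0.61
```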

  1. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and material for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  2. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
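The opponent channel idea described above can be sketched in a few lines, with an invented sigmoid tuning curve standing in for the measured hemifield preferences: the difference between a right-tuned and a left-tuned channel is monotonic in azimuth, so the location can be read back from it regardless of a common gain change.

```python
import numpy as np

def channel_response(azimuth_deg, preferred_side):
    """Broad sigmoid tuning favouring one hemifield (arbitrary shape,
    +1 = right-preferring, -1 = left-preferring)."""
    return 1 / (1 + np.exp(-preferred_side * azimuth_deg / 20.0))

def opponent_code(azimuth_deg):
    """Difference of the right-tuned and left-tuned channel responses."""
    return channel_response(azimuth_deg, +1) - channel_response(azimuth_deg, -1)

# The opponent signal is monotonic in azimuth, so a lookup table inverts it.
grid = np.linspace(-90, 90, 181)
codes = opponent_code(grid)

def decode(code):
    return grid[np.argmin(np.abs(codes - code))]

for az in (-60, -15, 0, 30, 75):
    assert abs(decode(opponent_code(az)) - az) < 1.0
print("decoded all test azimuths")
```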

  3. Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction

    Science.gov (United States)

    Miller, Robert E. (Robin)

    2005-04-01

    In acoustic spaces that are played as extensions of musical instruments, tonality is a major contributor to the experience of reality. Tonality is described as a process of integration in our consciousness, over the reverberation time of the room, of many sonic arrivals in three dimensions, each directionally coded in a learned response by the listener's unique head-related transfer function (HRTF). Preserving this complex 3D directionality is key to lifelike reproduction of a recording. Conventional techniques such as stereo or 5.1-channel surround sound position the listener at the apex of a triangle or the center of a circle, not the center of the sphere of lifelike hearing. A periphonic reproduction system for music and movie entertainment, Virtual Reality, and Training Simulation, termed PerAmbio 3D/2D (Pat. pending), is described in theory and in subjective tests. It captures the 3D sound field with a microphone array and transforms the periphonic signals into ordinary 6-channel media for either decoderless 2D replay on 5.1 systems, or lossless 3D replay with a decoder and five additional speakers. PerAmbio 3D/2D is described as a practical approach to preserving the spatial perception of reality, where the listening room and speakers disappear, leaving the acoustical impression of the original venue.

  4. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people are required in industrial surveillance and monitoring applications such as plant control or building security, and they must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.

  5. Urban Noise Recorded by Stationary Monitoring Stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir

    2017-10-01

    The paper presents the results of an analysis of the equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-size town in Poland) on the roads leading out of the town towards Łódź and Lublin. The measurements were carried out by stationary stations monitoring the noise and traffic of motor vehicles. RMS values based on the A-weighted sound level were recorded every 1 s in the buffer, and the results were registered every 1 min over the period of investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00, and from 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-hour periods, nights, days and evenings. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with the normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that, compared with the Łódź Road, more cases were recorded in the Lublin Road data for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation and the quartile deviation were proposed for performing a comparative analysis of the scattering of the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of the uncertainties and the coefficients of variation of the equivalent sound levels.
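The equivalent sound level computed from buffered short-interval samples is an energy average, Leq = 10·log10((1/N)·Σ 10^(Li/10)). A short sketch with hypothetical levels (the sample values below are invented, not the Kielce data):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level from short-interval level samples.

    Energy-averages the levels: Leq = 10*log10(mean(10^(L/10)))."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# One-minute A-weighted levels (hypothetical) over an hour: ten noisy
# minutes dominate the energy average even though most minutes are quiet.
night = [48.0] * 50 + [70.0] * 10
print(round(leq(night), 1))  # → 62.4, far above the arithmetic mean of 51.7
```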

  6. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a very crucial component of safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low rate of false alarms and a smaller archiving volume, embedded image processing capability for object-behaviour- and event-based indexing, object recognition, efficient querying, and report generation. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating storage and transmission. In physical protection, intelligent and robust video motion detection, real-time moving-object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, and human motion analysis are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools such as automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, and gesture recognition makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omnidirectional, all-weather, day-night surveillance a reality.

  7. Business Plan for a Record Company

    OpenAIRE

    Mbuthia, Alexander; Wakuwile, Janina

    2013-01-01

    The objective of this thesis is to develop a business plan for a record company named Kamoja Records in Espoo, Finland, that will focus on music and video production. The main purpose of this study is to determine whether this business plan is viable and whether the resulting company would be able to function as a vibrant record label. The business plan evaluates different features that are related to music and video production. The purpose is to obtain knowledge about business planning in gene...

  8. "Oh Such a Good Sound": A Case for a Macrocosmic Aesthetic of Grace in ASMR

    OpenAIRE

    Philip Rice

    2016-01-01

    Autonomous Sensory Meridian Response (ASMR) is a pseudoscientific neologism for a pleasurable tingling sensation reported by a growing number of people in response to soft sounds such as whispers, crinkles, and taps. ASMR is described as a “headgasm” where tingles start in the scalp or neck and radiate throughout the body. An active community of enthusiasts has emerged on YouTube in support of videos crafted by “ASMRtists.” Characteristics of the videos include a strong sense of personal...

  9. Spectral analysis of bowel sounds in intestinal obstruction using an electronic stethoscope.

    Science.gov (United States)

    Ching, Siok Siong; Tan, Yih Kai

    2012-09-07

    To determine the value of bowel sound analysis using an electronic stethoscope to support a clinical diagnosis of intestinal obstruction. Subjects were patients who presented with a diagnosis of possible intestinal obstruction based on symptoms, signs, and radiological findings. A 3M™ Littmann® Model 4100 electronic stethoscope was used in this study. With the patients lying supine, six 8-second recordings of bowel sounds were taken from the lower abdomen of each patient. The recordings were analysed for sound duration, sound-to-sound interval, dominant frequency, and peak frequency. Clinical and radiological data were reviewed and the patients were classified as having either acute, subacute, or no bowel obstruction. Comparison of bowel sound characteristics was made between these subgroups of patients. In the presence of an obstruction, the site of obstruction was identified and bowel calibre was also measured to correlate with bowel sounds. A total of 71 patients were studied during the period July 2009 to January 2011. Forty patients had acute bowel obstruction (27 small bowel obstruction and 13 large bowel obstruction), 11 had subacute bowel obstruction (eight in the small bowel and three in the large bowel) and 20 had no bowel obstruction (diagnoses of other conditions were made). Twenty-five patients (35.2%) received surgical intervention during the same admission for acute abdominal conditions. A total of 426 recordings were made and 420 recordings were used for analysis. There was no significant difference in sound-to-sound interval, dominant frequency, and peak frequency among patients with acute bowel obstruction, subacute bowel obstruction, and no bowel obstruction. In acute large bowel obstruction, the sound duration was significantly longer (median 0.81 s vs 0.55 s, P = 0.021) and the dominant frequency was significantly higher (median 440 Hz vs 288 Hz, P = 0.003) when compared to acute small bowel obstruction. No significant difference was seen
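
    The dominant frequency reported above can be estimated by peak-picking the magnitude spectrum of a recording. A sketch of that step, assuming the signal is already loaded as an array (NumPy used for the FFT; names are illustrative):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest-magnitude bin in the one-sided
    spectrum, with the DC bin excluded."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0  # ignore the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Sanity check with a pure 440 Hz tone sampled at 8 kHz for 1 s:
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
print(dominant_frequency(tone, fs))  # 440.0
```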

  10. A sound worth saving: acoustic characteristics of a massive fish spawning aggregation.

    Science.gov (United States)

    Erisman, Brad E; Rowell, Timothy J

    2017-12-01

    Group choruses of marine animals can produce extraordinarily loud sounds that markedly elevate levels of the ambient soundscape. We investigated sound production in the Gulf corvina (Cynoscion othonopterus), a soniferous marine fish with a unique reproductive behaviour threatened by overfishing, to compare with sounds produced by other marine animals. We coupled echosounder and hydrophone surveys to estimate the magnitude of the aggregation and the sounds produced during spawning. We characterized individual calls and documented changes in the soundscape generated by the presence of as many as 1.5 million corvina within a spawning aggregation spanning distances up to 27 km. We show that calls by male corvina represent the loudest sounds recorded in a marine fish, and that the spatio-temporal magnitude of their collective choruses is among the loudest animal sounds recorded in aquatic environments. While this wildlife spectacle is at great risk of disappearing due to overfishing, regional conservation efforts are focused on other endangered marine animals. © 2017 The Author(s).

  11. The Sound of 1-bit: Technical constraint and musical creativity on the 48k Sinclair ZX Spectrum

    Directory of Open Access Journals (Sweden)

    Kenneth B. McAlpine

    2017-12-01

    Full Text Available This article explores constraint as a driver of creativity and innovation in early video game soundtracks. Using what was, perhaps, the most constrained platform of all, the 48k Sinclair ZX Spectrum, as a prism through which to examine the development of an early branch of video game music, the paper explores the creative approaches adopted by programmers to circumvent the Spectrum’s technical limitations so as to coax the hardware into performing feats of musicality that it had never been designed to achieve. These solutions were not without computational or aural cost, however, and their application often imparted a unique characteristic to the sound, which over time came to define the aesthetic of the 8-bit computer soundtrack, a sound which has been developed since as part of the emerging chiptune scene. By discussing pivotal moments in the development of ZX Spectrum music, this article will show how the application of binary impulse trains, granular synthesis, and pulse-width modulation came to shape the sound of 1-bit music.
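
    The pulse-width modulation mentioned above exploits the fact that a 1-bit channel can only switch the speaker fully on or off: pitch is set by the pulse repetition rate, while the on/off ratio (duty cycle) shapes the perceived timbre. A hypothetical illustration of such a pulse train (not actual Spectrum code, which was written in Z80 assembly):

```python
def one_bit_tone(freq_hz, duty, sample_rate, n_samples):
    """Generate a 1-bit pulse train: within each period of the target
    frequency the output is 1 for `duty` of the period and 0 otherwise.
    Changing `duty` alters timbre without altering pitch."""
    period = sample_rate / freq_hz
    return [1 if (i % period) < duty * period else 0
            for i in range(n_samples)]

# One second of a 440 Hz wave at a 50% duty cycle:
samples = one_bit_tone(440, 0.5, 44100, 44100)
```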

  12. Copyright and Related Issues Relevant to Digital Preservation and Dissemination of Unpublished Pre-1972 Sound Recordings by Libraries and Archives. CLIR Publication No. 144

    Science.gov (United States)

    Besek, June M.

    2009-01-01

    This report addresses the question of what libraries and archives are legally empowered to do to preserve and make accessible for research their holdings of unpublished pre-1972 sound recordings. The report's author, June M. Besek, is executive director of the Kernochan Center for Law, Media and the Arts at Columbia Law School. Unpublished sound…

  13. Replacing the Orchestra? - The Discernibility of Sample Library and Live Orchestra Sounds.

    Directory of Open Access Journals (Sweden)

    Reinhard Kopiez

    Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage or the popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or from OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resemble the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.

  14. Developing a reference of normal lung sounds in healthy Peruvian children.

    Science.gov (United States)

    Ellington, Laura E; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H; Tielsch, James M; Chavez, Miguel A; Marin-Concha, Julio; Figueroa, Dante; West, James; Checkley, William

    2014-10-01

    Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected, and features extracted from spectral and temporal signal representations contributed to the profiling of lung sounds. Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters; most demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster-decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Extracted lung sound features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve the analysis, we offer a novel, reproducible tool for sound analysis in real-world environments.

  15. Deficient multisensory integration in schizophrenia: an event-related potential study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  16. The influence of meaning on the perception of speech sounds.

    Science.gov (United States)

    Kazanina, Nina; Phillips, Colin; Idsardi, William

    2006-07-25

    As part of knowledge of language, an adult speaker possesses information on which sounds are used in the language and on the distribution of these sounds in a multidimensional acoustic space. However, a speaker must know not only the sound categories of his language but also the functional significance of these categories, in particular, which sound contrasts are relevant for storing words in memory and which sound contrasts are not. Using magnetoencephalographic brain recordings with speakers of Russian and Korean, we demonstrate that a speaker's perceptual space, as reflected in early auditory brain responses, is shaped not only by bottom-up analysis of the distribution of sounds in his language but also by more abstract analysis of the functional significance of those sounds.

  17. Video Observations, Atmospheric Path, Orbit and Fragmentation Record of the Fall of the Peekskill Meteorite

    Science.gov (United States)

    Ceplecha, Z.; Brown, P.; Hawkes, R. L.; Wertherill, G.; Beech, M.; Mossman, K.

    1996-02-01

    Large near-Earth asteroids have played a role in modifying the character of the surface geology of the Earth over long time scales through impacts. Recent modeling of the disruption of large meteoroids during atmospheric flight has emphasized the dramatic effects that smaller objects may also have on the Earth's surface. However, comparison of these models with observations has not been possible until now. Peekskill is only the fourth meteorite to have been recovered for which detailed and precise data exist on the meteoroid's atmospheric trajectory and orbit. Consequently, there are few constraints on the position of meteorites in the solar system before impact on Earth. In this paper, a preliminary analysis based on 4 of the 15 video recordings of the fireball of October 9, 1992, which resulted in the fall of a 12.4 kg ordinary chondrite (H6 monomict breccia) in Peekskill, New York, is given. Preliminary computations revealed that the Peekskill fireball was an Earth-grazing event, the third such case with precise data available. The body, with an initial mass of the order of 10^4 kg, was in a pre-collision orbit with a = 1.5 AU, an aphelion of slightly over 2 AU, and an inclination of 5°. The no-atmosphere geocentric trajectory would have led to a perigee of 22 km above the Earth's surface, but the body never reached this point due to tremendous fragmentation and other forms of ablation. The dark flight of the recovered meteorite started from a height of 30 km, when the velocity dropped below 3 km/s, and the body continued 50 km more without ablation until it hit a parked car in Peekskill, New York, with a velocity of about 80 m/s. Our observations are the first video records of a bright fireball and the first motion pictures of a fireball with an associated meteorite fall.

  18. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Background: A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method: Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result: The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with a 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion: The proposed method showed promising results and high noise robustness over a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method based on this new training set.
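
    The cycle-length step described above (autocorrelation of the envelope instead of labeling individual FHS) can be sketched as follows; the synthetic envelope and parameter choices are illustrative only, not the paper's implementation:

```python
import numpy as np

def cycle_length(envelope, fs, min_lag_s=0.3):
    """Estimate the dominant repetition period (s) of an envelope as the
    lag of the highest autocorrelation peak beyond min_lag_s, which
    skips the trivial peak at lag zero."""
    x = np.asarray(envelope, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    min_lag = int(min_lag_s * fs)
    return (min_lag + int(np.argmax(ac[min_lag:]))) / fs

# Synthetic heart-sound envelope: bumps repeating every 0.8 s, fs = 200 Hz
fs = 200
t = np.arange(0, 8, 1 / fs)
env = np.maximum(0.0, np.sin(2 * np.pi * t / 0.8)) ** 4
print(cycle_length(env, fs))  # ~0.8
```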

  19. US market. Sound below the line

    Energy Technology Data Exchange (ETDEWEB)

    Iken, Joern

    2012-07-01

    The American Wind Energy Association (AWEA) is publishing warnings almost daily. The lack of political support is endangering jobs. The year 2011 broke no records, but there was a sound plus in expansion figures. (orig.)

  20. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. Video sequences were registered off-line to compensate for eye movements. From the registered video sequences, dynamic parameters such as cardiac-cycle-induced reflection changes and eye movements can be calculated and compared between eyes.

  1. Method for measuring violin sound radiation based on bowed glissandi and its application to sound synthesis.

    Science.gov (United States)

    Perez Carrillo, Alfonso; Bonada, Jordi; Patynen, Jukka; Valimaki, Vesa

    2011-08-01

    This work presents a method for measuring and computing violin-body directional frequency responses, which are used for violin sound synthesis. The approach is based on a frame-weighted deconvolution of excitation and response signals. The excitation, consisting of bowed glissandi, is measured with piezoelectric transducers built into the bridge. Radiation responses are recorded in an anechoic chamber with multiple microphones placed at different angles around the violin. The proposed deconvolution algorithm computes impulse responses that, when convolved with any source signal (captured with the same transducer), produce a highly realistic violin sound very similar to that of a microphone recording. The use of motion sensors allows for tracking violin movements. Combining this information with the directional responses and using a dynamic convolution algorithm, helps to improve the listening experience by incorporating the violinist motion effect in stereo.
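
    The paper's frame-weighted deconvolution is not detailed here, but the core idea, recovering an impulse response by spectral division of the response by the excitation, can be sketched. The regularization constant `eps` is an assumption added to avoid division by near-zero spectral bins:

```python
import numpy as np

def deconvolve(excitation, response, eps=1e-8):
    """Estimate an impulse response h with response ≈ excitation * h
    (linear convolution) via regularized spectral division."""
    n = len(excitation) + len(response) - 1  # pad to avoid circular wrap
    X = np.fft.rfft(excitation, n)
    Y = np.fft.rfft(response, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return np.fft.irfft(H, n)

# Round trip: convolve a known response with a noise excitation, recover it.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
h_true = np.array([1.0, 0.5, 0.25, 0.125])
y = np.convolve(x, h_true)
h_est = deconvolve(x, y)[:len(h_true)]
```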

  2. A Peer-Reviewed Instructional Video is as Effective as a Standard Recorded Didactic Lecture in Medical Trainees Performing Chest Tube Insertion: A Randomized Control Trial.

    Science.gov (United States)

    Saun, Tomas J; Odorizzi, Scott; Yeung, Celine; Johnson, Marjorie; Bandiera, Glen; Dev, Shelly P

    Online medical education resources are becoming an increasingly used modality and many studies have demonstrated their efficacy in procedural instruction. This study sought to determine whether a standardized online procedural video is as effective as a standard recorded didactic teaching session for chest tube insertion. A randomized control trial was conducted. Participants were taught how to insert a chest tube with either a recorded didactic teaching session or a New England Journal of Medicine (NEJM) video. Participants filled out a questionnaire before and after performing the procedure on a cadaver, which was filmed and assessed by 2 blinded evaluators using a standardized tool. Western University, London, Ontario. Level of clinical care: institutional. A total of 30 fourth-year medical students from 2 graduating classes at the Schulich School of Medicine & Dentistry were screened for eligibility. Two students did not complete the study and were excluded. There were 13 students in the NEJM group, and 15 students in the didactic group. The NEJM group's average score was 45.2% (±9.56) on the prequestionnaire, 67.7% (±12.9) for the procedure, and 60.1% (±7.65) on the postquestionnaire. The didactic group's average score was 42.8% (±10.9) on the prequestionnaire, 73.7% (±9.90) for the procedure, and 46.5% (±7.46) on the postquestionnaire. There was no difference between the groups on the prequestionnaire (Δ + 2.4%; 95% CI: -5.16 to 9.99), or the procedure (Δ -6.0%; 95% CI: -14.6 to 2.65). The NEJM group had better scores on the postquestionnaire (Δ + 11.15%; 95% CI: 3.74-18.6). The NEJM video was as effective as video-recorded didactic training for teaching the knowledge and technical skills essential for chest tube insertion. Participants expressed high satisfaction with this modality. It may prove to be a helpful adjunct to standard instruction on the topic. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc.

  3. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalizations is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual neighborhood-sharing concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample as their nearest neighbor. In order to evaluate classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted using Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN for all cases.
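
    A toy version of the mutual-neighbor idea behind EKNN, scoring a class both by the test sample's own neighbors and by training samples that would count the test sample among their k nearest, can be sketched as follows (a simplified illustration, not the paper's exact formulation):

```python
import numpy as np

def eknn_predict(X_train, y_train, x_test, k=3):
    """Toy Extended k-NN: vote with (a) the k nearest training samples
    of x_test and (b) every training sample whose own k nearest points,
    once x_test is added to the pool, include x_test."""
    d = np.linalg.norm(X_train - x_test, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:                      # (a) forward neighbours
        votes[int(y_train[i])] = votes.get(int(y_train[i]), 0) + 1
    for i in range(len(X_train)):                    # (b) mutual neighbours
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf                              # exclude self-distance
        kth = np.sort(np.append(d_i, d[i]))[k - 1]
        if d[i] <= kth:
            votes[int(y_train[i])] = votes.get(int(y_train[i]), 0) + 1
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(eknn_predict(X, y, np.array([0.05, 0.05])))  # 0
```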

  4. Review of sound card photogates

    International Nuclear Information System (INIS)

    Gingl, Zoltan; Mingesz, Robert; Mellar, Janos; Makra, Peter

    2011-01-01

    Photogates are probably the most commonly used electronic instruments to aid experiments in the field of mechanics. Although they are offered by many manufacturers, they can be too expensive to be widely used in all classrooms, in multiple experiments or even at home experimentation. Today all computers have a sound card - an interface for analogue signals. It is possible to make very simple yet highly accurate photogates for cents, while much more sophisticated solutions are also available at a still very low cost. In our paper we show several experimentally tested ways of implementing sound card photogates in detail, and we also provide full-featured, free, open-source photogate software as a much more efficient experimentation tool than the usually used sound recording programs. Further information is provided on a dedicated web page, www.noise.physx.u-szeged.hu/edudev.
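
    A sound-card photogate ultimately delivers a sampled voltage trace, in which beam interruptions appear as threshold crossings. A minimal event detector along those lines (the threshold and the simulated trace are illustrative, not taken from the referenced software):

```python
def gate_times(samples, sample_rate, threshold=0.5):
    """Times (s) of upward threshold crossings in a sampled signal,
    i.e. moments at which the photogate state changes."""
    return [i / sample_rate
            for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

# Two simulated 20 ms beam interruptions starting at 0.1 s and 0.3 s:
sample_rate = 1000
sig = [0.0] * 1000
for start in (100, 300):
    for i in range(start, start + 20):
        sig[i] = 1.0
print(gate_times(sig, sample_rate))  # [0.1, 0.3]
```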

  5. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio content (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is comprehensive and well-established open source software that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
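
    Batch jobs against FFmpeg are typically driven through its command line. A sketch that builds (but does not execute) one transcode command per file; the file names are invented, while `-i`, `-c:v` and `-c:a` are standard FFmpeg options for input and codec selection:

```python
from pathlib import Path

def transcode_command(src, dst_dir, vcodec="libx264", acodec="aac"):
    """Build an ffmpeg argument list converting `src` to an MP4 in
    `dst_dir`: -i selects the input, -c:v / -c:a the video/audio codecs."""
    dst = Path(dst_dir) / (Path(src).stem + ".mp4")
    return ["ffmpeg", "-i", str(src), "-c:v", vcodec, "-c:a", acodec, str(dst)]

# Each list would be handed to subprocess.run(...) for every file in a batch:
cmd = transcode_command("oral_history_0042.mov", "converted")
print(" ".join(cmd))
```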

  6. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgical operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated on 4 web-retrieved videos and 4 operating-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Developing a Reference of Normal Lung Sounds in Healthy Peruvian Children

    Science.gov (United States)

    Ellington, Laura E.; Emmanouilidou, Dimitra; Elhilali, Mounya; Gilman, Robert H.; Tielsch, James M.; Chavez, Miguel A.; Marin-Concha, Julio; Figueroa, Dante; West, James

    2018-01-01

    Purpose: Lung auscultation has long been a standard of care for the diagnosis of respiratory diseases. Recent advances in electronic auscultation and signal processing have yet to find clinical acceptance; however, computerized lung sound analysis may be ideal for pediatric populations in settings where skilled healthcare providers are commonly unavailable. We described features of normal lung sounds in young children using a novel signal processing approach to lay a foundation for identifying pathologic respiratory sounds. Methods: 186 healthy children with normal pulmonary exams and without respiratory complaints were enrolled at a tertiary care hospital in Lima, Peru. Lung sounds were recorded at eight thoracic sites using a digital stethoscope. 151 (81%) of the recordings were eligible for further analysis. Heavy-crying segments were automatically rejected and features extracted from spectral and temporal signal representations contributed to the profiling of lung sounds. Results: Mean age, height, and weight among study participants were 2.2 years (SD 1.4), 84.7 cm (SD 13.2), and 12.0 kg (SD 3.6), respectively, and 47% were boys. We identified ten distinct spectral and spectro-temporal signal parameters; most demonstrated linear relationships with age, height, and weight, while no differences between genders were noted. Older children had a faster-decaying spectrum than younger ones. Features like spectral peak width, lower-frequency Mel-frequency cepstral coefficients, and spectro-temporal modulations also showed variations with recording site. Conclusions: Extracted lung sound features varied significantly with child characteristics and lung site. A comparison with adult studies revealed differences in the extracted features for children. While sound-reduction techniques will improve analysis, we offer a novel, reproducible tool for sound analysis in real-world environments. PMID:24943262

  8. From different angles : Exploring and applying the design potential of video

    NARCIS (Netherlands)

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and with video editing programs on their computers, they are increasingly using video to record

  9. Making Sense of Video Analytics: Lessons Learned from Clickstream Interactions, Attitudes, and Learning Outcome in a Video-Assisted Course

    Directory of Open Access Journals (Sweden)

    Michail N. Giannakos

    2015-02-01

    Online video lectures have been considered an instructional medium for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity to record student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of users with their systems. For this purpose, we have designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study, which provides valuable insights through the lens of the collected video analytics. In particular, we found that there is a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicated that learning performance progress slightly improved and stabilized after the third week of the video-assisted course. We also found that attitudes regarding easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and provide the lessons learned for further development and refinement of video-assisted courses and practices.
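
A minimal sketch of how repeated-view counts might be derived from clickstream events. The `(user, seek_position)` event format and the 30 s segment size are illustrative assumptions, not the schema of the system described above.

```python
from collections import Counter

def repeated_views(events, segment_len=30):
    """Count navigation events per video segment.

    events: iterable of (user_id, seek_position_seconds) pairs; both the
    pair format and the fixed segment length are hypothetical here.
    """
    seg_views = Counter()
    for user, pos in events:
        seg_views[int(pos // segment_len)] += 1
    return seg_views

# Toy clickstream: three users seeking within a lecture video
events = [("u1", 12.0), ("u1", 95.0), ("u2", 100.0), ("u2", 101.5), ("u3", 15.0)]
print(repeated_views(events))  # segment 3 (90-120 s) draws the most navigation
```

Segments that accumulate disproportionately many repeated views are the candidates the study links to higher cognitive demand.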

  10. Video in Non-Formal Education: A Bibliographical Study.

    Science.gov (United States)

    Lewis, Peter M.

    Intended to inform United Nations member states about the application of electronic recording and replaying devices in the nonformal education domain, this bibliographic study surveys the literature on video. Since the study is meant to be of particular use to decision makers in developing countries, video projects in North America and Western…

  11. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    In this study we used a portable, event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor that detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge, this is the first study using a VMD sensor to monitor flower-visiting insects. Flower-visiting insects have traditionally been monitored by direct observation, which is time-demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e. ca. 0.35 s of review per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).

  12. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  13. Video context-dependent recall.

    Science.gov (United States)

    Smith, Steven M; Manzano, Isabel

    2010-02-01

    In two experiments, we used an effective new method for experimentally manipulating local and global contexts to examine context-dependent recall. The method included video-recorded scenes of real environments, with target words superimposed over the scenes. In Experiment 1, we used a within-subjects manipulation of video contexts and compared the effects of reinstatement of a global context (15 words per context) with the effects of less overloaded context cues (1 and 3 words per context) on recall. The size of the reinstatement effects in Experiment 1 shows how potently video contexts can cue recall. A strong effect of cue overload was also found; reinstatement effects were smaller, but still quite robust, in the 15-words-per-context condition. The powerful reinstatement effect was replicated for local contexts in Experiment 2, which included a no-contexts-reinstated group, a control condition used to determine whether reinstatement of half of the cues caused biased output interference for uncued targets. The video context method is a potent way to investigate context-dependent memory.

  14. Sounds of Modified Flight Feathers Reliably Signal Danger in a Pigeon.

    Science.gov (United States)

    Murray, Trevor G; Zeil, Jochen; Magrath, Robert D

    2017-11-20

    In his book on sexual selection, Darwin [1] devoted equal space to non-vocal and vocal communication in birds. Since then, vocal communication has become a model for studies of neurobiology, learning, communication, evolution, and conservation [2, 3]. In contrast, non-vocal "instrumental music," as Darwin called it, has only recently become subject to sustained inquiry [4, 5]. In particular, outstanding work reveals how feathers, often highly modified, produce distinctive sounds [6-9], and suggests that these sounds have evolved at least 70 times, in many orders [10]. It remains to be shown, however, that such sounds are signals used in communication. Here we show that crested pigeons (Ocyphaps lophotes) signal alarm with specially modified wing feathers. We used video and feather-removal experiments to demonstrate that the highly modified 8th primary wing feather (P8) produces a distinct note during each downstroke. The sound changes with wingbeat frequency, so that birds fleeing danger produce wing sounds with a higher tempo. Critically, a playback experiment revealed that only if P8 is present does the sound of escape flight signal danger. Our results therefore indicate, nearly 150 years after Darwin's book, that modified feathers can be used for non-vocal communication, and they reveal an intrinsically reliable alarm signal. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Lily Koo Lin, Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA. Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and material for surgical case presentations. This study examined whether a GoPro Hero 4 camera modified with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and a 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  16. 75 FR 16377 - Digital Performance Right in Sound Recordings and Ephemeral Recordings

    Science.gov (United States)

    2010-04-01

    ...). Petitions to Participate were received from: Intercollegiate Broadcast System, Inc./ Harvard Radio...), respectively, and the references to January 1, 2009, have been deleted. Next, for the reasons stated above in... State. (j) Retention of records. Books and records of a Broadcaster and of the Collective relating to...

  17. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  18. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual production. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. Experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  19. Alternative measures to observe and record vocal fold vibrations

    NARCIS (Netherlands)

    Schutte, HK; McCafferty, G; Coman, W; Carroll, R

    1996-01-01

    Vocal fold vibration patterns form the basis for the production of vocal sound. Over the years, much effort has been spent on optimizing ways to visualize and describe these patterns. Before video recording became available, describing the patterns was very time-consuming.

  20. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are becoming more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns from life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction may fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.

  1. 75 FR 14074 - Digital Performance Right in Sound Recordings and Ephemeral Recordings for a New Subscription...

    Science.gov (United States)

    2010-03-24

    ...). The additions to Sec. 383.3 read as follows: Sec. 383.3 Royalty fees for public performances of sound... Sec. 383.4 to read as follows: Sec. 383.4 Terms for making payment of royalty fees. (a) Terms in... payments to the Collective, late fees, statements of account, audit and verification of royalty payments...

  2. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work, we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  3. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work, we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
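
The interaural cues analyzed in this record can be computed from a stereo pair with a short FFT-based sketch. This is a generic illustration, not the authors' analysis code; the tone frequency, level ratio, and 0.5 ms delay below are made up to show the expected cue magnitudes.

```python
import numpy as np

def binaural_cues(left, right, sr, n_fft=1024):
    """Per-frequency interaural level (ILD, dB) and phase (IPD, rad) differences."""
    L = np.fft.rfft(left[:n_fft] * np.hanning(n_fft))
    R = np.fft.rfft(right[:n_fft] * np.hanning(n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    ild = 20.0 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))
    ipd = np.angle(L * np.conj(R))   # wrapped to (-pi, pi]
    return freqs, ild, ipd

# Example: a 500 Hz tone arriving earlier and louder at the left ear
sr = 16000
t = np.arange(2048) / sr
delay = 0.0005   # 0.5 ms interaural time difference
left = 1.0 * np.sin(2 * np.pi * 500 * t)
right = 0.5 * np.sin(2 * np.pi * 500 * (t - delay))
freqs, ild, ipd = binaural_cues(left, right, sr)
k = np.argmin(np.abs(freqs - 500))
print(round(ild[k], 1), round(ipd[k], 2))
```

At 500 Hz the sketch recovers an ILD of about 6 dB (the 2:1 amplitude ratio) and an IPD of 2π·500·0.0005 ≈ π/2 rad, the kind of per-channel cue whose natural-scene statistics the study characterizes.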

  4. Specially Designed Sound-Boxes Used by Students to Perform School-Lab Sensor–Based Experiments, to Understand Sound Phenomena

    Directory of Open Access Journals (Sweden)

    Stefanos Parskeuopoulos

    2011-02-01

    The research presented herein investigates and records students' perceptions of sound phenomena and their improvement during a specialised laboratory practice utilizing ICT and a simple experimental apparatus especially designed for teaching. This school-lab apparatus and its operation are also described herein. Seventy-one first- and second-grade vocational-school students, aged 16 to 20, participated in the research. They were divided into groups of 4-5 students, each of which worked for 6 hours to complete all assigned activities. Data collection was carried out through personal interviews as well as questionnaires distributed before and after the instructive intervention. The results show that students' active involvement with the simple teaching apparatus, through which the effects of sound waves are made visible, helps them comprehend sound phenomena. It also altered considerably their initial misconceptions about sound propagation. The results are presented diagrammatically herein, and some important observations are made relating to the teaching and learning of scientific concepts concerning sound.

  5. Digital video applications in radiologic education: theory, technique, and applications.

    Science.gov (United States)

    Hennessey, J G; Fishman, E K; Ney, D R

    1994-05-01

    Computer-assisted instruction (CAI) has great potential in medical education. The recent explosion of multimedia platforms provides an environment for the seamless integration of text, images, and sound into a single program. This article discusses the role of digital video in the current educational environment as well as its future potential. An in-depth review of the technical decisions surrounding this new technology is also presented.

  6. 12 CFR 1732.7 - Record hold.

    Science.gov (United States)

    2010-01-01

    ... Banking OFFICE OF FEDERAL HOUSING ENTERPRISE OVERSIGHT, DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT SAFETY AND SOUNDNESS RECORD RETENTION Record Retention Program § 1732.7 Record hold. (a) Definition. For... Enterprise or OFHEO that the Enterprise is to retain records relating to a particular issue in connection...

  7. Characterizing popularity dynamics of online videos

    OpenAIRE

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-01-01

    Online popularity has a major impact on videos, music, news, and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the already acquired popularity of each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and even span...

  8. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene with a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from particular cameras; however, there are ways to ov...

  9. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    As a component of the e-learning educational process, content plays an essential role. Video-recorded lectures in e-learning systems are becoming increasingly important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find a desired topic and reduce learning time, e-learning systems need to provide a capability for searching within video content, enabling learners to identify the video, or the portion of it, that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching in educational videos and their contents. A preliminary experiment was conducted on a selected university course. All students needed such a system to avoid the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students' performance and satisfaction.

  10. Wearable Eating Habit Sensing System Using Internal Body Sound

    Science.gov (United States)

    Shuzo, Masaki; Komori, Shintaro; Takashima, Tomoko; Lopez, Guillaume; Tatsuta, Seiji; Yanagimoto, Shintaro; Warisawa, Shin'ichi; Delaunay, Jean-Jacques; Yamada, Ichiro

    Continuous monitoring of eating habits could be useful in preventing lifestyle diseases such as metabolic syndrome. Conventional methods consist of self-reporting and calculating mastication frequency based on the myoelectric potential of the masseter muscle. Both these methods are significant burdens for the user. We developed a non-invasive, wearable sensing system that can record eating habits over a long period of time in daily life. Our sensing system is composed of two bone conduction microphones placed in the ears that send internal body sound data to a portable IC recorder. Applying frequency spectrum analysis on the collected sound data, we could not only count the number of mastications during eating, but also accurately differentiate between eating, drinking, and speaking activities. This information can be used to evaluate the regularity of meals. Moreover, we were able to analyze sound features to classify the types of foods eaten by food texture.

  11. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  12. Variability of road traffic noise recorded by stationary monitoring stations

    Science.gov (United States)

    Bąkowski, Andrzej; Radziszewski, Leszek

    2017-11-01

    The paper presents the results of an analysis of equivalent sound levels recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-sized town in Poland) on the roads out of town in the directions of Kraków and Warszawa. The measurements were carried out by stationary stations monitoring noise and motor vehicle traffic. RMS values based on the A-weighted sound level were recorded every 1 s in a buffer, and the results were registered every 1 min over the period of the investigation. The registered data were the basis for calculating the equivalent sound level for three time intervals: 6:00 to 18:00, 18:00 to 22:00, and 22:00 to 6:00. The analysis included the values of the equivalent sound level recorded for different days of the week, split into 24-h periods, nights, days, and evenings. The data analysed included recordings from 2013. The coefficient of variation and the positional coefficient of variation were proposed for a comparative analysis of the scattering of the obtained data. The investigation indicated that the recorded data varied depending on the traffic routes; the differences concerned the values of the coefficients of variation of the equivalent sound levels.
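
The equivalent sound level used above is an energy average of the sampled levels, L_eq = 10·log10((1/N)·Σ 10^(L_i/10)). A minimal sketch (the alternating 60/70 dB(A) samples are illustrative, not data from the Kielce stations):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (dB) from equal-duration samples."""
    # Energy average: dB -> relative power -> mean -> back to dB
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

def coeff_of_variation(levels_db):
    """Coefficient of variation (%) of the sampled levels."""
    n = len(levels_db)
    mean = sum(levels_db) / n
    std = math.sqrt(sum((l - mean) ** 2 for l in levels_db) / n)
    return 100 * std / mean

# One minute of 1-s samples alternating between 60 and 70 dB(A)
samples = [60, 70] * 30
print(round(leq(samples), 1))                 # → 67.4: closer to the louder level
print(round(coeff_of_variation(samples), 1))  # → 7.7
```

Because the average is taken over power rather than decibels, short loud episodes dominate L_eq, which is why the intervals (day, evening, night) are computed separately.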

  13. Efficient Temporal Action Localization in Videos

    KAUST Repository

    Alwassel, Humam

    2018-04-17

    State-of-the-art temporal action detectors inefficiently search the entire video for specific actions. Despite the encouraging progress these methods achieve, it is crucial to design automated approaches that explore only the parts of the video most relevant to the actions being searched. To address this need, we propose the new problem of action spotting in videos, which we define as finding a specific action in a video while observing only a small portion of that video. Inspired by the observation that humans are extremely efficient and accurate in spotting and finding action instances in a video, we propose Action Search, a novel recurrent neural network approach that mimics the way humans spot actions. Moreover, to address the absence of data recording the behavior of human annotators, we put forward the Human Searches dataset, which compiles the search sequences employed by human annotators spotting actions in the AVA and THUMOS14 datasets. We consider temporal action localization as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently (observing on average 17.3% of the video) but also accurately finds human activities with 30.8% mAP (0.5 tIoU), outperforming state-of-the-art methods.

  14. Use of active video games to increase physical activity in children: a (virtual) reality?

    Science.gov (United States)

    Foley, Louise; Maddison, Ralph

    2010-02-01

    There has been increased research interest in the use of active video games (in which players physically interact with images onscreen) as a means to promote physical activity in children. The aim of this review was to assess active video games as a means of increasing energy expenditure and physical activity behavior in children. Studies were obtained from computerized searches of multiple electronic bibliographic databases. The last search was conducted in December 2008. Eleven studies focused on the quantification of the energy cost associated with playing active video games, and eight studies focused on the utility of active video games as an intervention to increase physical activity in children. Compared with traditional nonactive video games, active video games elicited greater energy expenditure, which was similar in intensity to mild to moderate intensity physical activity. The intervention studies indicate that active video games may have the potential to increase free-living physical activity and improve body composition in children; however, methodological limitations prevent definitive conclusions. Future research should focus on larger, methodologically sound intervention trials to provide definitive answers as to whether this technology is effective in promoting long-term physical activity in children.

  15. Understanding ‘human’ waves: exploiting the physics in a viral video

    Science.gov (United States)

    Ferrer-Roca, Chantal

    2018-01-01

    Waves are a relevant part of physics that students find difficult to grasp, even in those cases in which wave propagation kinematics can be visualized. This may hinder a proper understanding of sound, light or quantum physics phenomena that are explained using a wave model. So-called ‘human’ waves, choreographed by people, have proved to be an advisable way to understand basic wave concepts. Videos are widely used as a teaching resource and can be of considerable help in order to watch and discuss ‘human’ waves provided their quality is reasonably good. In this paper we propose and analyse a video that went viral online and has been revealed to be a useful teaching resource for introductory physics students. It shows a unique and very complete series of wave propagations, including pulses with different polarizations and periodic waves that can hardly be found elsewhere. After a proposal on how to discuss the video qualitatively, a quantitative analysis is carried out (no video-tracker needed), including a determination of the main wave magnitudes such as period, wavelength and propagation speed.
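
The quantitative analysis mentioned above reduces to reading the period and wavelength off the video and combining them into the propagation speed. A toy calculation with made-up values (not measurements from the actual clip):

```python
# Propagation speed of a "human" wave from its period and wavelength.
# The numbers below are illustrative assumptions, not data from the viral video.
period = 1.4        # s, time for one full oscillation of a spectator
wavelength = 9.0    # m, crest-to-crest distance along the stand
speed = wavelength / period       # v = lambda / T
frequency = 1.0 / period          # f = 1 / T
print(f"v = {speed:.1f} m/s, f = {frequency:.2f} Hz")
```

The same three quantities (T, λ, v = λ/T) are the ones the paper proposes extracting frame by frame, with no video tracker needed.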

  16. Celebrity endorsed music videos: innovation to foster youth health promotion.

    Science.gov (United States)

    Macnab, A J; Mukisa, R

    2018-06-11

    There are calls for innovation in health promotion and for current issues to be presented in new and exciting ways; in addition to creating engaging messages, novel ways to deliver health messaging are needed, especially where youth are the key target audience. When pupils in WHO Health Promoting Schools were asked what health messages would resonate with them, they also identified celebrities as the 'messengers' they would be particularly likely to listen to. Expanding on these discussions, the pupils quoted celebrity-recorded music videos containing health and lifestyle messaging as an example of where they had learned from celebrities. Their ability to sing phrases from the songs and repeat key health messages they contained indicated the videos had commanded attention and provided knowledge and perspectives that had been retained. We located on YouTube the video titles the pupils identified and evaluated the content, messaging and production concepts these celebrity-recorded music videos incorporated. All are good examples of the health promotion genre known as education entertainment, where educational content is intentionally included in professionally produced entertainment media to impart knowledge, create favorable attitudes and impact future behaviors. The importance of this genre is growing in parallel with the burgeoning influence of social media. Music videos resonate with youth, and celebrity recordings combine young people's love of music with their fascination for the aura of celebrity. Hence, producing videos that combine an effective health message with celebrity endorsement offers potential as an innovative conduit for health promotion messaging among youth.

  17. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    Directory of Open Access Journals (Sweden)

    Kéver Loïc

    2012-12-01

    Full Text Available Abstract Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei, which is known to display an important sexual dimorphism in the swimbladder and anterior skeleton. The aims of this study were to compare the sound-producing morphology and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in the morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non-harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had a shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Conclusions Although it is not possible to distinguish males from females externally in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits, which may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of

  18. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  19. James Weldon Johnson and the Speech Lab Recordings

    Directory of Open Access Journals (Sweden)

    Chris Mustazza

    2016-03-01

    Full Text Available On December 24, 1935, James Weldon Johnson read thirteen of his poems at Columbia University, in a recording session engineered by Columbia Professor of Speech George W. Hibbitt and Barnard colleague Professor W. Cabell Greet, pioneers in the field that became sociolinguistics. Interested in American dialects, Greet and Hibbitt used early sound recording technologies to preserve dialect samples. In the same lab where they recorded T.S. Eliot, Gertrude Stein, and others, James Weldon Johnson read a selection of poems that included several from his seminal collection God’s Trombones and some dialect poems. Mustazza has digitized these and made them publicly available in the PennSound archive. In this essay, Mustazza contextualizes the collection, considering the recordings as sonic inscriptions alongside their textual manifestations. He argues that the collection must be heard within the frames of its production conditions—especially its recording in a speech lab—and that the sound recordings are essential elements in a hermeneutic analysis of the poems. He reasons that the poems’ original topics are reframed and refocused when historicized and contextualized within the frame of The Speech Lab Recordings.

  20. The Art and Science of Acoustic Recording: Re-enacting Arthur Nikisch and the Berlin Philharmonic Orchestra’s landmark 1913 recording of Beethoven’s Fifth Symphony

    Directory of Open Access Journals (Sweden)

    Dr Aleks Kolkowski

    2015-05-01

    Full Text Available The Art and Science of Acoustic Recording was a collaborative project between the Royal College of Music and the Science Museum that saw an historic orchestral recording from 1913 re-enacted by musicians, researchers and sound engineers at the Royal College of Music (RCM in 2014. The original recording was an early attempt to capture the sound of a large orchestra without re-scoring or substituting instruments and represents a step towards phonographic realism. Using replicated recording technology, media and techniques of the period, the re-enactment recorded two movements of Beethoven’s Fifth Symphony on to wax discs – the first orchestral acoustic recordings made since 1925. The aims were primarily to investigate the processes and practices of acoustic sound recording, developed largely through tacit knowledge, and to derive insights into the musicians’ experience of recording acoustically. Furthermore, the project sought to discover what the acoustic recordings of the past do – and don't – communicate to listeners today. Archival sources, historic apparatus and early photographic evidence served as groundwork for the re-enactment and guided its methodology, while the construction of replicas, wax manufacture and sound engineering were carried out by an expert in the field of acoustic recording. The wax recordings were digitised and some processed to produce disc copies playable on gramophone, thus replicating the entire course of recording, processing, duplication and reproduction. It is suggested that the project has contributed to a deeper understanding of early recordings and has provided a basis for further reconstructions of historical recording sessions.

  1. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 lens (focal length 12 mm, f/0.8) running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It significantly outperformed a previous system used by the author during the Perseids 2010 (DMK 21AF04.AS camera by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences between the two software packages (MetRec, UFO Capture), based on the author's experience, are discussed, along with a small guide to video meteor hardware.

  2. Magnetic Thin Films for Perpendicular Magnetic Recording Systems

    Science.gov (United States)

    Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya

    In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDD), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDD extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistants.

  3. Sound-induced facial synkinesis following facial nerve paralysis.

    Science.gov (United States)

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  4. Protocol Standards for Reporting Video Data in Academic Journals.

    Science.gov (United States)

    Rowland, Pamela A; Ignacio, Romeo C; de Moya, Marc A

    2016-04-01

    Editors of biomedical journals have estimated that a majority (40%-90%) of studies published in scientific journals cannot be replicated, even though an inherent principle of publication is that others should be able to replicate and build on published claims. Each journal sets its own protocols for establishing "quality" in articles, yet over the past 50 years, few journals in any field, especially medical education, have specified protocols for reporting the use of video data in research. The authors found that technical and industry-driven aspects of video recording, as well as a lack of standardization and reporting requirements by research journals, have led to major limitations in the ability to assess or reproduce video data used in research. Specific variables in the videotaping process (e.g., camera angle), which can be changed or modified, affect the quality of recorded data, leading to major reporting errors and, in turn, unreliable conclusions. As more data are now in the form of digital videos, the historical lack of reporting standards makes it increasingly difficult to accurately replicate medical educational studies. Reproducibility is especially important as the medical education community considers setting national high-stakes standards in medicine and surgery based on video data. The authors of this Perspective provide basic protocol standards for investigators and journals using video data in research publications so as to allow for reproducibility.

  5. Do physiotherapy staff record treatment time accurately? An observational study.

    Science.gov (United States)

    Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John

    2009-09-01

    Objective: To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Design: Comparison of physiotherapy staff's recording of treatment sessions with video recording. Setting: Rehabilitation stroke unit in a general hospital. Subjects: Thirty-nine stroke patients without trunk control, or who were unable to stand with an erect trunk without the support of two therapists, recruited to a randomized trial evaluating the Oswestry Standing Frame, and twenty-six physiotherapy staff who were involved in patient treatment. Main measures: Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording; intraclass correlation with 95% confidence interval, and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. Results: The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes, compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. The intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Conclusions: Physiotherapy staff's recording of the duration of treatment time was not reliable and was systematically greater than the video recording.
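The agreement statistics reported here (mean difference, reliability coefficient of 1.96 × SD, and 95% limits of agreement) follow the standard Bland and Altman calculation, which can be sketched as follows; the paired treatment times are invented for illustration, not the study's data:

```python
import statistics

# A sketch of the Bland-Altman agreement analysis: mean difference, SD of
# differences, reliability coefficient (1.96 * SD) and 95% limits of
# agreement. All paired minute counts below are invented.

def bland_altman(staff_min, video_min):
    """Bland-Altman agreement between two paired measurement methods."""
    diffs = [v - s for s, v in zip(staff_min, video_min)]  # video minus staff
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample standard deviation
    rc = 1.96 * sd_d                      # reliability coefficient
    return mean_d, sd_d, (mean_d - rc, mean_d + rc)

staff = [30, 35, 28, 40, 32, 25, 38, 31]  # staff-logged minutes (invented)
video = [24, 27, 25, 31, 26, 20, 30, 26]  # video-evidenced minutes (invented)
mean_d, sd_d, (lo, hi) = bland_altman(staff, video)
print(f"mean difference {mean_d:.2f} min, 95% limits {lo:.1f} to {hi:.1f}")
```

A systematic bias shows up as a mean difference well away from zero, exactly as the study found for staff-logged versus video-evidenced times.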

  6. A Comparative Study of Video Presentation Modes in Relation to L2 Listening Success

    Science.gov (United States)

    Li, Chen-Hong

    2016-01-01

    Video comprehension involves interpreting both sounds and images. Research has shown that processing an aural text with relevant pictorial information effectively enhances second/foreign language (L2) listening comprehension. A hypothesis underlying this mixed-methods study is that a visual-only silent film used as an advance organiser to activate…

  7. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    Full Text Available Terufumi Shimoda,1 Yasushi Obase,2 Yukio Nagasaka,3 Hiroshi Nakano,1 Akiko Ishimatsu,1 Reiko Kishikawa,1 Tomoaki Iwanaga1 1Clinical Research Center, Fukuoka National Hospital, Fukuoka, 2Second Department of Internal Medicine, School of Medicine, Nagasaki University, Nagasaki, 3Kyoto Respiratory Center, Otowa Hospital, Kyoto, Japan Purpose: Airway inflammation can be detected by lung sound analysis (LSA) at a single point in the posterior lower lung field. We performed LSA at 7 points to examine whether the technique could identify the location of airway inflammation in patients with asthma. Patients and methods: Breath sounds were recorded at 7 points on the body surface of 22 asthmatic subjects. Inspiration sound pressure level (ISPL), expiration sound pressure level (ESPL), and the expiration-to-inspiration sound pressure ratio (E/I) were calculated in 6 frequency bands. The data were analyzed for potential correlation with spirometry, airway hyperresponsiveness (PC20), and fractional exhaled nitric oxide (FeNO). Results: The E/I data in the frequency range of 100–400 Hz (E/I low frequency [LF], E/I mid frequency [MF]) were better correlated with the spirometry, PC20, and FeNO values than were the ISPL or ESPL data. The left anterior chest and left posterior lower recording positions were associated with the best correlations (forced expiratory volume in 1 second/forced vital capacity: r=–0.55 and r=–0.58; logPC20: r=–0.46 and r=–0.45; and FeNO: r=0.42 and r=0.46, respectively). The majority of asthmatic subjects with FeNO ≥70 ppb exhibited high E/I MF levels in all lung fields (excluding the trachea) and V50%pred <80%, suggesting inflammation throughout the airway. Asthmatic subjects with FeNO <70 ppb showed high or low E/I MF levels depending on the recording position, indicating uneven airway inflammation. Conclusion: E/I LF and E/I MF are more useful LSA parameters for evaluating airway inflammation in bronchial asthma; 7-point lung
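The E/I parameter the study relies on compares expiratory to inspiratory sound power within a frequency band (e.g. 100–400 Hz), expressed in dB. A rough sketch of that computation, using synthetic signals and a naive DFT in place of a proper FFT-based spectral estimator; none of this is the authors' actual pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power of `signal` within [f_lo, f_hi] Hz, via a
    naive DFT (fine for a short illustration; a real analysis would use
    an FFT with windowing and averaging)."""
    n = len(signal)
    k_lo = math.ceil(f_lo * n / fs)
    k_hi = math.floor(f_hi * n / fs)
    power = 0.0
    for k in range(k_lo, k_hi + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        power += (re * re + im * im) / (n * n)
    return power

# Synthetic 200 Hz breath-sound surrogates: expiration twice the amplitude
# of inspiration, sampled at 2 kHz for 0.5 s.
fs, n = 2000, 1000
insp = [0.5 * math.sin(2 * math.pi * 200 * t / fs) for t in range(n)]
expi = [1.0 * math.sin(2 * math.pi * 200 * t / fs) for t in range(n)]
ei_db = 10 * math.log10(band_power(expi, fs, 100, 400) /
                        band_power(insp, fs, 100, 400))
print(f"E/I in the 100-400 Hz band: {ei_db:.1f} dB")
```

A doubled expiratory amplitude quadruples the band power, so E/I comes out near 6 dB here.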

  8. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults (science teachers, parents wanting to help with homework, home-schoolers) seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  9. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, which aimed to record the growth of aquatic vegetation in Antarctic lakes over one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing power consumption. This system was set on the lake floor in Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  10. On the Sediment Dynamics in a Tidally Energetic Channel: The Inner Sound, Northern Scotland

    Directory of Open Access Journals (Sweden)

    Jason McIlvenny

    2016-04-01

    Full Text Available Sediment banks within a fast-flowing tidal channel, the Inner Sound in the Pentland Firth, were mapped using multi-frequency side-scan sonar. This novel technique provides a new tool for seabed sediment and benthic habitat mapping. The sonar data are supplemented by sediment grabs and ROV videos. The combined data provide detailed maps of persistent sand and shell banks present in the Sound despite the high-energy environment. Acoustic Doppler Current Profiler (ADCP) data and numerical model predictions were used to understand the hydrodynamics of the system. By combining the hydrodynamics and sediment distribution data, we explain the sediment dynamics in the area. Sediment particle shape and density, coupled with persistent features of the hydrodynamics, are the key factors in the distribution of sediment within the channel. Implications for tidal energy development planned for the Sound are discussed.

  11. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Full Text Available Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and often is linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations will create sounds similar to sounds created by vocal cords during speech and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) in subjects undergoing simultaneous cardiac catheterization. From the collected heart sounds, the relative power of the frequency band, the energy of the sinusoid formants, and entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAp) ≥ 25 mmHg versus subjects with a mPAp < 25 mmHg, with a sensitivity of 84% and specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. The reduction in first-sinusoid-formant entropy of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
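The classification step described (linear discriminant analysis with leave-one-out cross-validation, summarized by sensitivity and specificity) can be sketched for a single feature with a nearest-class-mean rule, which is what LDA reduces to in one dimension with equal class variances. The entropy values and labels below are invented for illustration, not the study's data:

```python
# Leave-one-out cross-validation of a nearest-class-mean rule on one
# feature (equivalent to 1-D LDA with equal class variances).

def loocv_sens_spec(features, labels):
    """Return (sensitivity, specificity) under leave-one-out CV.
    labels: 1 = PAH (mPAP >= 25 mmHg), 0 = non-PAH."""
    tp = tn = fp = fn = 0
    for i, (x, y) in enumerate(zip(features, labels)):
        # Hold out sample i, fit class means on the rest.
        train = [(f, l) for j, (f, l) in enumerate(zip(features, labels)) if j != i]
        c1 = [f for f, l in train if l == 1]
        c0 = [f for f, l in train if l == 0]
        mean1, mean0 = sum(c1) / len(c1), sum(c0) / len(c0)
        pred = 1 if abs(x - mean1) < abs(x - mean0) else 0
        if y == 1:
            tp, fn = tp + pred, fn + (1 - pred)
        else:
            tn, fp = tn + (1 - pred), fp + pred
    return tp / (tp + fn), tn / (tn + fp)

# Invented first-formant entropy values: lower in PAH, as the abstract reports.
entropy = [0.42, 0.45, 0.40, 0.48, 0.44, 0.71, 0.75, 0.69, 0.73, 0.77]
label   = [1,    1,    1,    1,    1,    0,    0,    0,    0,    0]
sens, spec = loocv_sens_spec(entropy, label)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```

With well-separated invented clusters the rule classifies every held-out subject correctly; real heart-sound features overlap, which is why the study reports figures below 100%.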

  12. Video content analysis of surgical procedures.

    Science.gov (United States)

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The review was obtained from PubMed and Google Scholar search on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  13. Combined Amplification and Sound Generation for Tinnitus: A Scoping Review.

    Science.gov (United States)

    Tutaj, Lindsey; Hoare, Derek J; Sereda, Magdalena

    In most cases, tinnitus is accompanied by some degree of hearing loss. Current tinnitus management guidelines recognize the importance of addressing hearing difficulties, with hearing aids being a common option. Sound therapy is the preferred mode of audiological tinnitus management in many countries, including in the United Kingdom. Combination instruments provide a further option for those with an aidable hearing loss, as they combine amplification with a sound generation option. The aims of this scoping review were to catalog the existing body of evidence on combined amplification and sound generation for tinnitus and consider opportunities for further research or evidence synthesis. A scoping review is a rigorous way to identify and review an established body of knowledge in the field for suggestive but not definitive findings and gaps in current knowledge. A wide variety of databases were used to ensure that all relevant records within the scope of this review were captured, including gray literature, conference proceedings, dissertations and theses, and peer-reviewed articles. Data were gathered using scoping review methodology and consisted of the following steps: (1) identifying potentially relevant records; (2) selecting relevant records; (3) extracting data; and (4) collating, summarizing, and reporting results. Searches using 20 different databases covered peer-reviewed and gray literature and returned 5959 records. After exclusion of duplicates and works that were out of scope, 89 records remained for further analysis. The records identified varied considerably in methodology, applied management programs, and type of devices. There were significant differences in practice between different countries and clinics regarding candidature and fitting of combination aids, partly driven by the application of different management programs. Further studies on the use and effects of combined amplification and sound generation for tinnitus are

  14. Fish4Knowledge collecting and analyzing massive coral reef fish video data

    CERN Document Server

    Chen-Burger, Yun-Heh; Giordano, Daniela; Hardman, Lynda; Lin, Fang-Pang

    2016-01-01

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3 year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 Tb of storage, supercomputer processing, video target detection and tracking, fish species recognition and analysis, a large SQL database to record the results and an efficient retrieval mechanism. Novel user interface mechanisms were developed to provide easy access for marine ecologists, who wanted to explore the dataset. The book is a useful resource for system builders, as it gives an overview of the many new methods that were created to build the Fish4Knowledge system in a manner that also allows readers to see ho...

  15. The Perspective of Six Malaysian Students on Playing Video Games: Beneficial or Detrimental?

    Science.gov (United States)

    Baki, Roselan; Yee Leng, Eow; Wan Ali, Wan Zah; Mahmud, Rosnaini; Hamzah, Mohd. Sahandri Gani

    2008-01-01

    This study provides a glimpse into understanding the potential benefits as well as harm of playing video games from the perspective of six Malaysian secondary school students, aged 16-17 years old. The rationale of the study is to enable parents, educators, administrators and policy makers to develop a sound understanding on the impact of playing…

  16. SOUND VELOCITY and Other Data from USS STUMP DD-978) (NCEI Accession 9400069)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The sound velocity data in this accession were collected from USS STUMP DD-978 by US Navy. The sound velocity in water is analog profiles data that was recorded in...

  17. Fish assemblages associated with natural and anthropogenically-modified habitats in a marine embayment: comparison of baited videos and opera-house traps.

    Directory of Open Access Journals (Sweden)

    Corey B Wakefield

    Full Text Available Marine embayments and estuaries play an important role in the ecology and life history of many fish species. Cockburn Sound is one of relatively few marine embayments on the west coast of Australia. Its sheltered waters and close proximity to a capital city have resulted in anthropogenic intrusion and extensive seascape modification. This study aimed to compare the sampling efficiencies of baited videos and fish traps in determining the relative abundance and diversity of temperate demersal fish species associated with naturally occurring (seagrass, limestone outcrops and soft sediment) and modified (rockwall and dredge channel) habitats in Cockburn Sound. Baited videos sampled a greater range of species in higher total and mean abundances than fish traps. The larger amount of data collected by baited videos allowed for greater discrimination of fish assemblages between habitats. The markedly higher diversity and abundances of fish associated with seagrass and limestone outcrops, and the fact that these habitats are very limited within Cockburn Sound, suggest they play an important role in the fish ecology of this embayment. Fish assemblages associated with modified habitats comprised a subset of species in lower abundances when compared to natural habitats with similar physical characteristics. This suggests modified habitats may not have provided the necessary resource requirements (e.g. shelter and/or diet) for some species, resulting in alterations to the natural trophic structure and interspecific interactions. Baited videos provided a more efficient and non-extractive method for comparing fish assemblages and habitat associations of smaller-bodied species and juveniles in a turbid environment.

  18. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  19. Physics and music the science of musical sound

    CERN Document Server

    White, Harvey E

    2014-01-01

    Comprehensive and accessible, this foundational text surveys general principles of sound, musical scales, characteristics of instruments, mechanical and electronic recording devices, and many other topics. More than 300 illustrations plus questions, problems, and projects.

  20. Lung function interpolation by means of neural-network-supported analysis of respiration sounds

    NARCIS (Netherlands)

    Oud, M

    Respiration sounds of individual asthmatic patients were analysed as part of the development of a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen-provoked airways obstruction, during several stages

  1. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  2. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    Science.gov (United States)

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  3. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases gradually emerge for consumer applications. The large capacity of disks requires the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key

  4. Effects of sounds of locomotion on speech perception

    Directory of Open Access Journals (Sweden)

    Matz Larsson

    2015-01-01

    Full Text Available Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in step or out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (the "just follow conversation" or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
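    The arithmetic behind these numbers is simple level differences: the JFC threshold is a speech level, so the SNR at threshold and the benefit of synchrony fall out as subtractions. A hedged sketch (function names are ours; the dBA values below are the study's medians):

```python
def snr_at_jfc(jfc_level_dba, masker_level_dba):
    """SNR (dB) at the 'just follow conversation' point: the speech level
    the listener needed minus the footstep-masker level."""
    return jfc_level_dba - masker_level_dba

def synchrony_benefit(jfc_sync_dba, jfc_unsync_dba):
    """Masking release (dB) from synchronized vs unsynchronized steps:
    a lower JFC level under synchrony means that much less speech level
    was needed to follow the conversation."""
    return jfc_unsync_dba - jfc_sync_dba
```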

  5. Modern recording techniques

    CERN Document Server

    Huber, David Miles

    2013-01-01

    As the most popular and authoritative guide to recording, Modern Recording Techniques provides everything you need to master the tools and day-to-day practice of music recording and production. From room acoustics and running a session to mic placement and designing a studio, Modern Recording Techniques will give you a thorough grounding in both theory and industry practice. Expanded to include the latest digital audio technology, the 7th edition now includes sections on podcasting, new surround sound formats, and HD audio. If you are just starting out or looking for a step up

  6. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  7. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    Science.gov (United States)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system's performance using recorded aircraft flyovers.

  8. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    Science.gov (United States)

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664
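    The four verification criteria amount to a filter over sensor-logged impact records. A minimal sketch (the record field names are hypothetical; in the study the criteria were checked manually against synchronized video):

```python
def verify_impacts(impacts):
    """Keep sensor-logged impacts meeting the study's four criteria.
    Each impact is a dict; field names here are illustrative."""
    def ok(i):
        return (i["pla_g"] >= 20           # (1) linear acceleration >= 20 g
                and i["player_identified"]  # (2) player identified on field
                and i["in_camera_view"]     # (3) player in camera view
                and i["mechanism_clear"])   # (4) impact mechanism clearly seen
    return [i for i in impacts if ok(i)]
```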

  9. High-throughput phenotyping of plant resistance to aphids by automated video tracking.

    Science.gov (United States)

    Kloth, Karen J; Ten Broeke, Cindy Jm; Thoen, Manus Pm; Hanhart-van den Brink, Marianne; Wiegers, Gerrie L; Krips, Olga E; Noldus, Lucas Pjj; Dicke, Marcel; Jongsma, Maarten A

    2015-01-01

    Piercing-sucking insects are major vectors of plant viruses causing significant yield losses in crops. Functional genomics of plant resistance to these insects would greatly benefit from the availability of high-throughput, quantitative phenotyping methods. We have developed an automated video tracking platform that quantifies aphid feeding behaviour on leaf discs to assess the level of plant resistance. Through the analysis of aphid movement, the start and duration of plant penetrations by aphids were estimated. As a case study, video tracking confirmed the near-complete resistance of lettuce cultivar 'Corbana' against Nasonovia ribisnigri (Mosely), biotype Nr:0, and revealed quantitative resistance in Arabidopsis accession Co-2 against Myzus persicae (Sulzer). The video tracking platform was benchmarked against Electrical Penetration Graph (EPG) recordings and aphid population development assays. The use of leaf discs instead of intact plants reduced the intensity of the resistance effect in video tracking, but sufficiently replicated experiments resulted in similar conclusions as EPG recordings and aphid population assays. One video tracking platform could screen 100 samples in parallel. Automated video tracking can be used to screen large plant populations for resistance to aphids and other piercing-sucking insects.

  10. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  11. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
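    An AMBB-style stimulus couples the AM envelope and the IPD with a fixed mutual phase relation, so that the IPD during the rising slope differs from the IPD at the envelope peak. A sketch under stated assumptions (raised-cosine envelope, sinusoidal IPD trajectory; the study's exact carrier, modulation rate, and phase relation are not reproduced here):

```python
import numpy as np

def ambb(fc=500.0, fm=8.0, dur=1.0, fs=48000, ipd_max=np.pi / 2):
    """Amplitude-modulated binaural beat (sketch): both ears receive a
    raised-cosine AM tone at carrier fc; the interaural phase difference
    sweeps through each AM cycle in fixed relation to the envelope."""
    t = np.arange(int(dur * fs)) / fs
    env = 0.5 * (1.0 - np.cos(2 * np.pi * fm * t))   # raised-cosine AM
    ipd = ipd_max * np.sin(2 * np.pi * fm * t)       # IPD cycles with the AM
    left = env * np.sin(2 * np.pi * fc * t)
    right = env * np.sin(2 * np.pi * fc * t + ipd)
    return left, right
```

    Reading the instantaneous IPD at the response-dominant phase of the AM cycle is then what links the recordings to the behavioral dominance of the rising slope.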

  12. Video-based self-review: comparing Google Glass and GoPro technologies.

    Science.gov (United States)

    Paro, John A M; Nazareli, Rahim; Gurjala, Anadev; Berger, Aaron; Lee, Gordon K

    2015-05-01

    Professionals in a variety of specialties use video-based review as a method of constant self-evaluation. We believe critical self-reflection will allow a surgical trainee to identify methods for improvement throughout residency and beyond. We used 2 popular new technologies, Google Glass and GoPro cameras, to evaluate their role in accomplishing these objectives. Medical students, residents, and faculty were invited to wear each of the devices during a scheduled operation. After the case, each participant was asked to comment on a number of features of the device, including comfort, level of distraction/interference with operating, ease of video acquisition, and battery life. Software and hardware specifications were compiled and compared by the authors. A "proof of concept" was also performed using the video-conferencing abilities of Google Glass to perform a simulated flap check. The technical specifications of the 2 cameras favor GoPro over Google Glass: Glass records in 720p with 5-MP still shots, whereas the GoPro records in 1080p with 12-MP still shots. Our tests of battery life showed more than 2 hours of continuous video with GoPro and less than 1 hour for Glass. Favorable features of Google Glass included comfort and relative ease of use; however, participants could not comfortably wear loupes while operating and would have preferred longer hands-free video recording. The GoPro was slightly more cumbersome and required a nonsterile team member to activate all pictures or video; however, loupes could be worn. Google Glass was successfully used in the hospital for a simulated flap check, with overall audio and video transmitted, although fine detail was lost. There are benefits and limitations to each of the devices tested. Google Glass is in its infancy and may gain a larger intraoperative role in the future. We plan to use Glass as a way for trainees to easily acquire intraoperative footage as a means to "review tape" and

  13. Assessing Caribbean Shallow and Mesophotic Reef Fish Communities Using Baited-Remote Underwater Video (BRUV) and Diver-Operated Video (DOV) Survey Techniques

    Science.gov (United States)

    Macaya-Solis, Consuelo; Exton, Dan A.; Gress, Erika; Wright, Georgina; Rogers, Alex D.

    2016-01-01

    Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30–150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering what component of reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth. PMID:27959907

  14. Video lottery: winning expectancies and arousal.

    Science.gov (United States)

    Ladouceur, Robert; Sévigny, Serge; Blaszczynski, Alexander; O'Connor, Kieron; Lavoie, Marc E

    2003-06-01

    This study investigates the effects of video lottery players' expectancies of winning on physiological and subjective arousal. Participants were assigned randomly to one of two experimental conditions: high and low winning expectancies. Participants played 100 video lottery games in a laboratory setting while physiological measures were recorded. Level of risk-taking was controlled. Participants were 34 occasional or regular video lottery players. They were assigned randomly into two groups of 17, with nine men and eight women in each group. The low-expectancy group played for fun, therefore expecting to win worthless credits, while the high-expectancy group played for real money. Players' experience, demographic variables and subjective arousal were assessed. Severity of problem gambling was measured with the South Oaks Gambling Screen. In order to measure arousal, the average heart rate was recorded across eight periods. Participants exposed to high as compared to low expectations experienced faster heart rate prior to and during the gambling session. According to self-reports, it is the expectancy of winning money that is exciting, not playing the game. Regardless of the level of risk-taking, expectancy of winning is a cognitive factor influencing levels of arousal. When playing for fun, gambling becomes significantly less stimulating than when playing for money.

  15. Quality of Experience Assessment of Video Quality in Social Clouds

    Directory of Open Access Journals (Sweden)

    Asif Ali Laghari

    2017-01-01

    Full Text Available Video sharing on social clouds is popular among users around the world. High-definition (HD) videos have large file sizes, so storing them in cloud storage and streaming them at high quality from the cloud to the client are significant problems for service providers. Social clouds compress videos to save storage and to stream them over slow networks while maintaining quality of service (QoS). Compression decreases video quality relative to the original, and playback parameters change during online play as well as after download. Degradation of video quality due to compression lowers the quality of experience (QoE) of end users. To assess the QoE of video compression, we conducted subjective QoE experiments by uploading, sharing, and playing videos from social clouds. Three popular social clouds, Facebook, Tumblr, and Twitter, were selected to upload and play videos online for users. The QoE was recorded using a questionnaire in which users reported the video quality they perceived. Results show that Facebook and Twitter compressed HD videos more than the other clouds did; however, Facebook delivered better quality in its compressed videos than Twitter. Accordingly, users assigned lower ratings to Twitter for online video quality than to Tumblr, which provided high-quality online playback with less compression.

  16. 75 FR 67777 - Copyright Office; Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972

    Science.gov (United States)

    2010-11-03

    ... (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., spoken, or other sounds, but not including the sounds accompanying a motion picture or other audiovisual... general, Federal law is better defined, both as to the rights and the exceptions, and more consistent than...

  17. The Effect of Music in Video Mediated Instruction on Student Achievement.

    Science.gov (United States)

    Talabi, J. K.

    1986-01-01

    Describes a study of secondary school students in Nigeria to determine whether use of musical accompaniment on videotape recordings used in instruction of economic geography had any effects on students' learning. Results offer inconclusive differences in effect between video instruction accompanied by music and video instruction without music.…

  18. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis (TB) is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from those of non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary TB, despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB were identified using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples, and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
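    The idea behind an overlap-based feature ranking can be illustrated with a simple histogram-overlap measure: a feature whose healthy and TB distributions barely overlap separates the classes well. This is a stand-in for the paper's SOF, whose exact definition may differ:

```python
import numpy as np

def statistical_overlap_factor(a, b, bins=30):
    """Histogram-overlap stand-in for a statistical overlap factor (SOF):
    the shared area of the two normalized feature histograms.
    1.0 = distributions identical; 0.0 = fully separable feature."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    ha, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(ha, hb)) * width)
```

    Ranking candidate features by this score (lowest overlap first) would then select the inputs fed to the classifier.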

  19. Comparison between uroflowmetry and sonouroflowmetry in recording of urinary flow in healthy men.

    Science.gov (United States)

    Krhut, Jan; Gärtner, Marcel; Sýkora, Radek; Hurtík, Petr; Burda, Michal; Luňáček, Libor; Zvarová, Katarína; Zvara, Peter

    2015-08-01

    To evaluate the accuracy of sonouroflowmetry in recording urinary flow parameters and voided volume. A total of 25 healthy male volunteers (age 18-63 years) were included in the study. All participants were asked to carry out uroflowmetry synchronous with recording of the sound generated by the urine stream hitting the water level in the urine collection receptacle, using a dedicated cell phone. Of 188 recordings, 34 were excluded because of insufficient voided volume. Pearson's correlation coefficient was used to compare parameters recorded by uroflowmetry with those calculated based on sonouroflowmetry recordings. The flow pattern recorded by sonouroflowmetry showed a good correlation with the uroflowmetry trace. A strong correlation (Pearson's correlation coefficient 0.87) was documented between uroflowmetry-recorded flow time and duration of the sound signal recorded with sonouroflowmetry. A moderate correlation was observed in voided volume (Pearson's correlation coefficient 0.68) and average flow rate (Pearson's correlation coefficient 0.57). A weak correlation (Pearson's correlation coefficient 0.38) between maximum flow rate recorded using uroflowmetry and sonouroflowmetry-recorded peak sound intensity was documented. The present study shows that the basic concept utilizing sound analysis for estimation of urinary flow parameters and voided volume is valid. However, further development of this technology and standardization of the recording algorithm are required. © 2015 The Japanese Urological Association.
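    All of the comparisons above reduce to Pearson's r between paired parameter lists (e.g., uroflowmetry flow time vs. sound-signal duration across subjects). A self-contained sketch of the statistic itself:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length
    sequences, as used to compare uroflowmetry parameters with their
    sound-derived counterparts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```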

  20. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. 
We can generate smooth, artifact-free output audio signals by applying
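    The image-source step at the heart of the specular-reflection model is compact enough to sketch: reflect the source across a planar wall, then treat the image as a direct source whose distance gives the reflection's arrival delay. A geometry-only sketch (plane given as a point plus unit normal; the visibility culling that is the thesis's contribution is omitted):

```python
import math

def mirror_across_plane(src, plane_pt, normal):
    """First step of the image-source method: reflect the source across a
    planar wall to obtain the image source for one specular bounce."""
    d = sum((s - p) * n for s, p, n in zip(src, plane_pt, normal))
    return tuple(s - 2 * d * n for s, n in zip(src, normal))

def specular_delay(src, listener, plane_pt, normal, c=343.0):
    """Arrival delay (s) of the single-bounce specular reflection: the
    straight-line distance from image source to listener over the speed
    of sound c."""
    img = mirror_across_plane(src, plane_pt, normal)
    return math.dist(img, listener) / c
```

    Higher-order reflections repeat the mirroring recursively; the visibility algorithms described above exist precisely to prune the exponential growth of candidate image sources.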

  1. The Learning Potential of Video Sketching

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte; Gundersen, Peter Bukovica

    2017-01-01

    been identified: shaping, recording, viewing and editing. Combined with the different modes, these steps constitute the basis of our video sketching framework. This framework has been used as a tool for redesigning learning activities. It suggests new scenarios to include in future research using...

  2. Impact of Video Feedback on Teachers' Eye-Contact Mannerisms in Microteaching.

    Science.gov (United States)

    Karasar, Niyazi

    To test the impact of video feedback on teachers' eye-contact mannerisms in microteaching in inservice vocational teacher education, the study utilized video recordings from the data bank generated by previous studies conducted at the Ohio State University's Center for Vocational and Technical Education. The tapes were assigned through a…

  3. Records Management And Private Sector Organizations | Mnjama ...

    African Journals Online (AJOL)

    This article begins by examining the role of records management in private organizations. It identifies the major reason why organizations ought to manage their records effectively and efficiently. Its major emphasis is that a sound records management programme is a pre-requisite to quality management system programme ...

  4. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  5. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    OpenAIRE

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize ...

  6. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    46 CFR 7.20 (Shipping, Atlantic Coast): Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY.

  7. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage
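    The memory-conserving region-of-interest idea reduces to keeping only selected lines of each field and a fixed number of samples per line. A toy sketch with plain Python lists standing in for digitized video fields (parameter names are ours, not the module's):

```python
def roi_samples(field, first_line, last_line, samples_per_line):
    """Keep only the selected lines of a digitized video field, decimated
    to a fixed sample count per line, so a transient recorder of fixed
    size can hold many more fields."""
    roi = []
    for line in field[first_line:last_line + 1]:
        step = max(1, len(line) // samples_per_line)
        roi.append(line[::step][:samples_per_line])
    return roi
```

    Recording, say, 3 lines of 4 samples instead of a full 480 x 640 field illustrates how the same memory budget stretches across far more fields or a longer shot.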

  8. Sound quality assessment of wood for xylophone bars.

    Science.gov (United States)

    Aramaki, Mitsuko; Baillères, Henri; Brancheriau, Loïc; Kronland-Martinet, Richard; Ystad, Sølvi

    2007-04-01

    Xylophone sounds produced by striking wooden bars with a mallet are strongly influenced by the mechanical properties of the wood species chosen by the xylophone maker. In this paper, we address the relationship between the sound quality, based on the timbre attributes of impacted wooden bars, and the physical parameters characterizing wood species. For this, a methodology is proposed that associates an analysis-synthesis process with a perceptual classification test. Sounds generated by impacting 59 wooden bars of different species but with the same geometry were recorded and classified by a renowned instrument maker. The sounds were further digitally processed and adjusted to the same pitch before being classified once again. The processing is based on a physical model ensuring that the main characteristics of the wood are preserved during the sound transformation. Statistical analysis of both classifications showed the influence of pitch on the xylophone maker's judgement and pointed out the importance of two timbre descriptors: the frequency-dependent damping and the spectral bandwidth. These descriptors are linked with physical and anatomical characteristics of wood species, providing new clues for the choice of attractive wood species from a musical point of view.
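The two descriptors singled out above can be estimated directly from a recorded impact sound. The sketch below does so for a synthetic damped tone (all signal parameters are hypothetical): the damping comes from the slope of the log amplitude envelope, and the bandwidth from the second moment of the power spectrum.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
alpha = 8.0                                  # decay rate in 1/s (hypothetical)
bar = np.exp(-alpha * t) * np.sin(2 * np.pi * 440 * t)

# Damping: fit a line to the log of block maxima of the envelope.
blocks = np.abs(bar).reshape(100, -1).max(axis=1)
t_blocks = np.arange(100) / 100.0
damping = -np.polyfit(t_blocks, np.log(blocks), 1)[0]   # should recover ~alpha

# Spectral bandwidth: spread of the power spectrum around its centroid.
power = np.abs(np.fft.rfft(bar)) ** 2
freqs = np.fft.rfftfreq(len(bar), 1 / fs)
centroid = (freqs * power).sum() / power.sum()
bandwidth = np.sqrt(((freqs - centroid) ** 2 * power).sum() / power.sum())
```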

  9. Path length entropy analysis of diastolic heart sounds.

    Science.gov (United States)

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties.
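The 1/f structure mentioned above can be probed with a path-length measure. The sketch below is a simplified illustration in the spirit of PLE, not the authors' exact algorithm: coarse-graining shrinks the path length of white noise much faster than that of a 1/f²-like (Brownian) signal, which is the kind of scale-dependent structure such a measure exploits.

```python
import numpy as np

def path_length(x):
    """Total variation ('path length') of a 1-D signal."""
    return np.abs(np.diff(x)).sum()

def scaled_path_lengths(x, scales=(1, 2, 4, 8)):
    """Path length of the signal after coarse-graining at several scales."""
    out = []
    for s in scales:
        n = len(x) // s * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)
        out.append(path_length(coarse))
    return out

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)          # flat-spectrum reference
brown = np.cumsum(white)                   # 1/f^2-like signal

pl_white = scaled_path_lengths(white)
pl_brown = scaled_path_lengths(brown)
```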

  10. Video as a Metaphorical Eye: Images of Positionality, Pedagogy, and Practice

    Science.gov (United States)

    Hamilton, Erica R.

    2012-01-01

    Considered by many to be cost-effective and user-friendly, video technology is utilized in a multitude of contexts, including the university classroom. One purpose, although not often used, involves recording oneself teaching. This autoethnographic study focuses on the author's use of video and reflective practice in order to capture and examine…

  11. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

    Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  12. Non-Book Materials and Copyright.

    Science.gov (United States)

    McNally, Paul T.

    1978-01-01

    Reviews Australian copyright laws as they apply to photographs, slides, overhead transparencies, filmstrips, sound and video recordings, and films. Responsibilities of the library as user are discussed. (RAO)

  13. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Science.gov (United States)

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines whether the inclusion of a defined standard benchmark performance, in association with video feedback of a student's own performance, improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study. This demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated moderate positive correlation with expert assessor scores (r = 0.48, p < …). After the benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83, p < …). The inclusion of a benchmark performance in combination with video feedback may significantly improve the accuracy of students' self-assessments.

  14. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.[2,3] Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be captured easily at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper discusses using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this involves using model rockets to determine the acceleration during the boost period right at launch and comparing it to a simple model of the expected acceleration.
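The boost-phase analysis described above reduces to differentiating tracked positions twice. A minimal sketch with an idealized 240 fps track and an assumed 60 m/s² boost acceleration (both values invented for illustration):

```python
import numpy as np

fps = 240.0                      # high-speed capture rate (hypothetical camera)
dt = 1.0 / fps
a_true = 60.0                    # assumed boost acceleration, m/s^2

t = np.arange(0, 0.25, dt)       # frame times covering the boost phase
y = 0.5 * a_true * t**2          # idealized tracked rocket positions, m

# Differentiate twice, as a video-analysis package would.
v = np.gradient(y, dt)
a = np.gradient(v, dt)
a_boost = a[2:-2].mean()         # drop edges where the gradient is one-sided
```

Central differences recover a quadratic trajectory's acceleration exactly; with real tracking data the estimate would be noisier and need smoothing.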

  15. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    Science.gov (United States)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remains debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion as well as the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggest a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurs in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation. 
Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did

  16. Evaluating Environmental Sounds from a Presence Perspective for Virtual Reality Applications

    Directory of Open Access Journals (Sweden)

    Nordahl Rolf

    2010-01-01

    Full Text Available We propose a methodology to design and evaluate environmental sounds for virtual environments. We propose to combine physically modeled sound events with recorded soundscapes. Physical models are used to provide feedback to users' actions, while soundscapes reproduce the characteristic soundmarks of an environment. In this particular case, physical models are used to simulate the act of walking in the botanical garden of the city of Prague, while soundscapes are used to reproduce the particular sound of the garden. The designed auditory feedback was combined with a photorealistic reproduction of the same garden. A between-subject experiment was conducted, in which 126 subjects participated, involving six different experimental conditions, including both uni- and bimodal stimuli (auditory and visual). The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced interactive sounds simulated using physical models. Results show that subjects' motion in the environment is significantly enhanced when dynamic sound sources and the sound of egomotion are rendered in the environment.

  17. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    Full Text Available This project is intended to analyze, design and implement a realtime sound source localization system using a mobile robot as the platform. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent-magnet DC motors as the actuators for the mobile robot, a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display medium. In order to find the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested, and the results show that the system can localize in realtime a sound source placed randomly on a half-circle area (0°-180°) with a radius of 0.3 m-3 m, assuming the system is at the center point of the circle. Due to low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
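With only two microphones, the core of such a localizer is estimating the inter-microphone time delay and converting it to an angle. A minimal cross-correlation sketch (the spacing, sample rate, and delay below are made-up values, and the real system's beamforming is more involved than a single lag estimate):

```python
import numpy as np

c = 343.0            # speed of sound, m/s
d = 0.2              # microphone spacing, m (hypothetical)
fs = 48000           # sample rate, Hz

rng = np.random.default_rng(1)
src = rng.standard_normal(4800)            # broadband source signal

# Simulate a source 60 degrees off the array axis: ~14 samples of delay.
delay = 14
mic1 = src
mic2 = np.concatenate([np.zeros(delay), src[:-delay]])

# Cross-correlate to find the lag, then convert lag -> delay -> angle.
corr = np.correlate(mic2, mic1, mode="full")
lag = corr.argmax() - (len(src) - 1)       # in samples
tau = lag / fs
theta = np.degrees(np.arccos(np.clip(tau * c / d, -1.0, 1.0)))
```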

  18. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    Science.gov (United States)

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association

  19. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive to changes within the lung tissue before any other measure, but it is unknown whether they can detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at six chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected from airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1% predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1% predicted 96.8, SD = 20.2). Smokers presented significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75) and a higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds
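Descriptors like "frequency at maximum sound intensity" and mean intensity in dB can be computed straightforwardly from a recorded segment. The sketch below uses a synthetic signal (the 117 Hz component echoes the smokers' mean value reported above, but every other parameter is invented, and the dB reference assumes the signal is calibrated in pascals):

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
# Toy "inspiratory sound": a 117 Hz component buried in light noise.
x = 0.5 * np.sin(2 * np.pi * 117 * t) + 0.05 * rng.standard_normal(len(t))

# Frequency at maximum intensity: the peak bin of the power spectrum.
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_max = freqs[spec.argmax()]

# Mean intensity in dB re 20 micropascals (valid if x were in pascals).
rms = np.sqrt((x ** 2).mean())
level_db = 20 * np.log10(rms / 2e-5)
```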

  20. Mechanisms of Wing Beat Sound in Flapping Wings of Beetles

    Science.gov (United States)

    Allen, John

    2017-11-01

    While the aerodynamic aspects of insect flight have received recent attention, the mechanisms of sound production by flapping wings are not well understood. Though the harmonic structure of wing beat frequency modulation has been reported with respect to its biological implications, few studies have rigorously quantified it with respect to directionality, phase coupling and vortex tip scattering. Moreover, the acoustic detection and classification of invasive species is of both practical and scientific interest. In this study, the acoustics of the tethered flight of the Coconut Rhinoceros Beetle (Oryctes rhinoceros) is investigated with a four-element microphone array in conjunction with complementary optical sensors and high-speed video. The different experimental methods for wing beat determination are compared in both the time and frequency domain. Flow visualization is used to examine the vortex and sound generation due to the torsional mode of the wing rotation. Results are compared with related experimental studies of the Oriental Flower Beetle. USDA, State of Hawaii.

  1. Are binaural recordings needed for subjective and objective annoyance assessment of traffic noise?

    DEFF Research Database (Denmark)

    Rodríguez, Estefanía Cortés; Song, Wookeun; Brunskog, Jonas

    2011-01-01

    Humans are annoyed when they are exposed to environmental noise. Traditional measures such as sound pressure levels may not correlate well with how humans perceive annoyance, therefore it is important to investigate psychoacoustic metrics that may correlate better with the perceived annoyance...... of environmental noise than the A-weighted equivalent sound pressure level. This study examined whether the use of binaural recordings of sound events improves the correlation between the objective metrics and the perceived annoyance, particularly for road traffic noise. Metrics based on measurement with a single...... microphone and on binaural sound field recordings have been examined and compared. In order to acquire data for the subjective perception of annoyance, a series of listening tests has been carried out. It is concluded that binaural loudness metrics from binaural recordings are better correlated...
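For reference, the A-weighted level mentioned above applies a standard frequency weighting before energy summation. A sketch of the IEC 61672 A-weighting curve (the single-microphone side only; binaural loudness models, which the study finds correlate better with annoyance, are considerably more involved):

```python
import numpy as np

def a_weight_db(f):
    """IEC 61672 A-weighting, in dB, at frequency f in Hz."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.00 dB offset normalizes the weighting to 0 dB at 1 kHz.
    return 20 * np.log10(ra) + 2.00

w_1k = a_weight_db(1000.0)    # ~0 dB at 1 kHz by construction
w_100 = a_weight_db(100.0)    # low frequencies are strongly attenuated
```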

  2. COMPARISON OF 2D AND 3D VIDEO DISPLAYS FOR TEACHING VITREORETINAL SURGERY.

    Science.gov (United States)

    Chhaya, Nisarg; Helmy, Omar; Piri, Niloofar; Palacio, Agustina; Schaal, Shlomit

    2017-07-11

    To compare medical students' learning uptake and understanding of vitreoretinal surgeries by watching either 2D or 3D video recordings. Three vitreoretinal procedures (tractional retinal detachment, exposed scleral buckle removal, and four-point scleral fixation of an intraocular lens [TSS]) were recorded simultaneously with a conventional recorder for two-dimensional viewing and a VERION 3D HD system using Sony HVO-1000MD for three-dimensional viewing. Two videos of each surgery, one 2D and the other 3D, were edited to have the same content side by side. One hundred UMass medical students randomly assigned to a 2D group or 3D, then watched corresponding videos on a MacBook. All groups wore BiAL Red-blue 3D glasses and were appropriately randomized. Students filled out questionnaires about surgical steps or anatomical relationships of the pathologies or tissues, and their answers were compared. There was no significant difference in comprehension between the two groups for the extraocular scleral buckle procedure. However, for the intraocular TSS and tractional retinal detachment videos, the 3D group performed better than 2D (P < 0.05) on anatomy comprehension questions. Three-dimensional videos may have value in teaching intraocular ophthalmic surgeries. Surgical procedure steps and basic ocular anatomy may have to be reviewed to ensure maximal teaching efficacy.

  3. Daily Digest Generation of Kindergartner from Surveillance Video

    Science.gov (United States)

    Ishikawa, Tomoya; Wang, Yu; Kato, Jien

    Nowadays, children spend much of their time in kindergartens and nursery schools. This creates a natural demand from parents: they want to see how their children's days go. To meet this demand, in this paper, we propose a method to automatically generate a video digest that records kids' daily life in kindergarten. Our method involves two steps. The first is to efficiently narrow down the search space by analyzing the noisy RFID tag log which records kids' temporal location, while the second is to use visual features and time constraints to recognize events and pick out video segments for each individual event. The accuracy of our method was evaluated with a quantitative experiment, and the superiority of digests generated by our method was confirmed via a questionnaire survey.
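The first step, narrowing the search space from the tag log, can be sketched as interval merging over (timestamp, zone) readings. The log contents, zone names, and 15-minute merge gap below are all invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical RFID log: (timestamp, zone) readings for one child's tag.
log = [
    ("09:00", "playroom"), ("09:07", "playroom"),
    ("10:30", "yard"), ("10:41", "yard"),
    ("12:00", "lunchroom"),
]

def candidate_intervals(log, gap_min=15):
    """Merge consecutive readings in the same zone (within gap_min minutes)
    into intervals; only video from these intervals needs event analysis."""
    fmt = "%H:%M"
    intervals = []
    for ts, zone in log:
        t = datetime.strptime(ts, fmt)
        if intervals and intervals[-1][2] == zone and \
                t - intervals[-1][1] <= timedelta(minutes=gap_min):
            intervals[-1] = (intervals[-1][0], t, zone)   # extend interval
        else:
            intervals.append((t, t, zone))                # start a new one
    return [(a.strftime(fmt), b.strftime(fmt), z) for a, b, z in intervals]
```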

  4. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...

  5. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    Science.gov (United States)

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R² ≥ 0.94, low mean squared error). Of the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of models. Performance of index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy (R² = 0.195), indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change.
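Two of the indices named above can be sketched in a few lines. The definitions here are simplified stand-ins (published acoustic-index implementations differ in detail), and the test signals are synthetic:

```python
import numpy as np

def acoustic_activity(x, fs, frame_ms=50, thresh_db=-50):
    """Fraction of short frames whose energy exceeds a threshold relative
    to the loudest frame (a simplified 'acoustic activity'-style index)."""
    n = int(fs * frame_ms / 1000)
    frames = x[:len(x) // n * n].reshape(-1, n)
    e = (frames ** 2).mean(axis=1)
    e_db = 10 * np.log10(e / e.max() + 1e-12)
    return (e_db > thresh_db).mean()

def roughness_index(x):
    """Mean absolute second difference, a crude 'roughness' proxy."""
    return np.abs(np.diff(x, n=2)).mean()

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
quiet = 0.001 * np.sin(2 * np.pi * 100 * t)               # steady hum
busy = np.sin(2 * np.pi * 1000 * t) * (np.sin(2 * np.pi * 3 * t) > 0)  # calls
```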

  6. Video interviewing as a learning resource

    DEFF Research Database (Denmark)

    Hedemann, Lars; Søndergaard, Helle Alsted

    2011-01-01

    The present investigation was carried out as a pilot study, with the aim of obtaining exploratory insights into the field of learning and, more specifically, how video technology can be used as a means to enhance the outcome of the learning process. The motivation behind the study has its...... basis in the management education literature, and thereby in the discussion of how to organize teaching in order to equip students with improved skills in reflective realization. Following the notion that experience is the basis for knowledge, the study set out to explore how students at higher...... education programmes, i.e. at MSc and MBA level, can benefit from utilizing video-recorded interviews in their process of learning and reflection. On the basis of the study, it is suggested that video interviewing offers an interesting alternative to other learning approaches such as Simulation...

  7. Understanding and crafting the mix the art of recording

    CERN Document Server

    Moylan, William

    2014-01-01

    Understanding and Crafting the Mix, 3rd edition provides the framework to identify, evaluate, and shape your recordings with clear and systematic methods. Featuring numerous exercises, this third edition allows you to develop critical listening and analytical skills to gain greater control over the quality of your recordings. Sample production sequences and descriptions of the recording engineer's role as composer, conductor, and performer provide you with a clear view of the entire recording process. Dr. William Moylan takes an inside look into a range of iconic popular music, thus offering insights into making meaningful sound judgments during recording. His unique focus on the aesthetic of recording and mixing will allow you to immediately and artfully apply his expertise while at the mixing desk. A companion website features recorded tracks to use in exercises, reference materials, additional examples of mixes and sound qualities, and mixed tracks.

  8. Computer-based video analysis identifies infants with absence of fidgety movements.

    Science.gov (United States)

    Støen, Ragnhild; Songstad, Nils Thomas; Silberg, Inger Elisabeth; Fjørtoft, Toril; Jensenius, Alexander Refsum; Adde, Lars

    2017-10-01

    Background: Absence of fidgety movements (FMs) at 3 months' corrected age is a strong predictor of cerebral palsy (CP) in high-risk infants. This study evaluates the association between computer-based video analysis and the temporal organization of FMs assessed with the General Movement Assessment (GMA). Methods: Infants were eligible for this prospective cohort study if referred to a high-risk follow-up program in a participating hospital. Video recordings taken at 10-15 weeks post-term age were used for GMA and computer-based analysis. The variation of the spatial center of motion, derived from differences between subsequent video frames, was used for quantitative analysis. Results: Of 241 recordings from 150 infants, 48 (24.1%) were classified with absence of FMs or sporadic FMs using the GMA. The variation of the spatial center of motion (C_SD) during a recording was significantly lower in infants with normal (0.320; 95% confidence interval (CI) 0.309, 0.330) vs. absent or sporadic (0.380; 95% CI 0.361, 0.398) FMs (P < 0.001). A triage model with C_SD thresholds chosen for a sensitivity of 90% and specificity of 80% gave a 40% referral rate for GMA. Conclusion: Quantitative video analysis during the FMs period can be used to triage infants at high risk of CP to early intervention or observational GMA.
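The quantity used here, the variation of the spatial centre of motion, can be reconstructed in simplified form from frame differencing. This is a toy re-implementation on synthetic frames, not the study's software:

```python
import numpy as np

def motion_centroid_sd(frames):
    """SD of the spatial centre of motion computed from absolute
    frame-to-frame differences (a simplified analogue of C_SD)."""
    centroids = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(float) - prev.astype(float))
        total = diff.sum()
        if total == 0:
            continue                      # no motion between these frames
        ys, xs = np.indices(diff.shape)
        centroids.append(((ys * diff).sum() / total,
                          (xs * diff).sum() / total))
    c = np.array(centroids)
    c /= np.array(frames[0].shape)        # normalize by image size
    return c.std(axis=0).mean()

rng = np.random.default_rng(3)

def clip(jitter, n_frames=20):
    """Toy clip: a bright blob whose position jitters by +/- jitter pixels."""
    frames = []
    for _ in range(n_frames):
        img = np.zeros((64, 64))
        cy, cx = 32 + rng.integers(-jitter, jitter + 1, size=2)
        img[cy - 2:cy + 3, cx - 2:cx + 3] = 1.0
        frames.append(img)
    return frames

fidgety = motion_centroid_sd(clip(20))    # widely wandering motion
still = motion_centroid_sd(clip(2))       # barely moving
```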

  9. Measurement and classification of heart and lung sounds by using LabView for educational use.

    Science.gov (United States)

    Altrabsheh, B

    2010-01-01

    This study presents the design, development and implementation of a simple low-cost method of phonocardiography signal detection. Human heart and lung signals are detected by using a simple microphone connected to a personal computer; the signals are recorded and analysed using LabView software. Amplitude and frequency analyses are carried out for various phonocardiography pathological cases. Methods for automatic classification of normal and abnormal heart sounds, murmurs and lung sounds are presented. Various cases of heart and lung sound measurement are recorded and analysed, and the measurements can be saved for further analysis. The method in this study can be used by doctors as a diagnostic aid and may be useful for teaching purposes at medical and nursing schools.

  10. Video stereopsis of cardiac MR images

    International Nuclear Information System (INIS)

    Johnson, R.F. Jr.; Norman, C.

    1988-01-01

    This paper describes MR images of the heart acquired using a spin-echo technique synchronized to the electrocardiogram. Sixteen 0.5-cm-thick sections with a 0.1-cm gap between each section were acquired in the coronal view to cover all the cardiac anatomy including vasculature. Two sets of images were obtained with a subject rotation corresponding to the stereoscopic viewing angle of the eyes. The images were digitized, spatially registered, and processed by a three-dimensional graphics work station for stereoscopic viewing. Video recordings were made of each set of images and then temporally synchronized to produce a single video image corresponding to the appropriate eye view

  11. Behavioral responses of silverback gorillas (Gorilla gorilla gorilla) to videos.

    Science.gov (United States)

    Maloney, Margaret A; Leighty, Katherine A; Kuhar, Christopher W; Bettinger, Tamara L

    2011-01-01

    This study examined the impact of video presentations on the behavior of 4 silverback, western lowland gorillas (Gorilla gorilla gorilla). On each of 5 occasions, gorillas viewed 6 types of videos (blue screen, humans, an all-male or mixed-sex group engaged in low activity, and an all-male or mixed-sex group engaged in agonistic behavior). The study recorded behavioral responses and watching rates. All gorillas preferred dynamic over static videos; 3 watched videos depicting gorillas significantly more than those depicting humans. Among the gorilla videos, the gorillas clearly preferred watching the mixed-sex group engaged in agonistic behavior; yet, this did not lead to an increase in aggression or behavior indicating agitation. Further, habituation to videos depicting gorillas did not occur. This supports the effectiveness of this form of enrichment, particularly for a nonhuman animal needing to be separated temporarily due to illness, shipment quarantine, social restructuring, or exhibit modification.

  12. Manipulations of the features of standard video lottery terminal (VLT) games: effects in pathological and non-pathological gamblers.

    Science.gov (United States)

    Loba, P; Stewart, S H; Klein, R M; Blackburn, J R

    2001-01-01

    The present study was conducted to identify game parameters that would reduce the risk of abuse of video lottery terminals (VLTs) by pathological gamblers, while exerting minimal effects on the behavior of non-pathological gamblers. Three manipulations of standard VLT game features were explored. Participants were exposed to: a counter which displayed a running total of money spent; a VLT spinning reels game where participants could no longer "stop" the reels by touching the screen; and sensory feature manipulations. In control conditions, participants were exposed to standard settings for either a spinning reels or a video poker game. Dependent variables were self-ratings of reactions to each set of parameters. A set of 2(3) x 2 x 2 (game manipulation [experimental condition(s) vs. control condition] x game [spinning reels vs. video poker] x gambler status [pathological vs. non-pathological]) repeated measures ANOVAs were conducted on all dependent variables. The findings suggest that the sensory manipulations (i.e., fast speed/sound or slow speed/no sound manipulations) produced the most robust reaction differences. Before advocating harm reduction policies such as lowering sensory features of VLT games to reduce potential harm to pathological gamblers, it is important to replicate findings in a more naturalistic setting, such as a real bar.

  13. The insider's guide to home recording record music and get paid

    CERN Document Server

    Tarquin, Brian

    2015-01-01

The last decade has seen an explosion in the number of home-recording studios. With the mass availability of sophisticated technology, there has never been a better time to do it yourself and make a profit. Take a studio journey with Brian Tarquin, the multiple-Emmy-award winning recording artist and producer, as he leads you through the complete recording process, and shows you how to perfect your sound using home equipment. He guides you through the steps to increase your creative freedom, and offers numerous tips to improve the effectiveness of your workflow. Topics covered in this book incl…

  14. Generating OER by Recording Lectures: A Case Study

    Science.gov (United States)

    Llamas-Nistal, Martín; Mikic-Fonte, Fernando A.

    2014-01-01

    The University of Vigo, Vigo, Spain, has the objective of making all the teaching material generated by its teachers freely available. To attain this objective, it encourages the development of Open Educational Resources, especially videos. This paper presents an experience of recording lectures and generating the corresponding videos as a step…

  15. Development of an Amplifier for Electronic Stethoscope System and Heart Sound Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. J.; Kang, D. K. [Chongju University, Chongju (Korea)

    2001-05-01

The conventional stethoscope cannot store its stethoscopic sounds. A doctor therefore diagnoses a patient from the instantaneous sounds heard at that moment and cannot recall the state of the patient's stethoscopic sounds at the next examination, which prevents accurate and objective diagnosis. If an electronic stethoscope that can store the stethoscopic sound is developed, auscultation will be greatly improved. This study describes an amplifier for an electronic stethoscope system that can extract heart sounds of a fetus as well as an adult and allows the sounds to be heard and recorded. Using the developed stethoscopic amplifier, clean heart sounds of fetus and adult can be heard in noisy environments, such as a consultation room of a university hospital or a university laboratory. Remarkably, the heart sound of a 22-week fetus was heard through the developed electronic stethoscope. Pitch-detection experiments using the detected heart sounds showed that the signal exhibits distinct periodicity. The developed electronic stethoscope can be expected to substitute for conventional stethoscopes, and if a proper analysis method for the stethoscopic signal is developed, a good electronic stethoscope system can be produced. (author). 17 refs., 6 figs.
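The periodicity reported in the pitch-detection experiments can be illustrated with a basic autocorrelation estimate. The sketch below runs on a synthetic burst train (a 50 Hz tone repeating at 1.25 Hz, i.e. 75 beats/min), not on the authors' recordings:

```python
import numpy as np

def estimate_period(signal, fs):
    """Estimate the dominant period of a signal via autocorrelation.

    Returns the lag (in seconds) of the largest autocorrelation peak
    after the zero-lag peak.
    """
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    # Skip the zero-lag peak: search only after the first negative value.
    first_dip = np.argmax(ac < 0)
    peak = first_dip + np.argmax(ac[first_dip:])
    return peak / fs

# Simulated heart-sound envelope: short 50 Hz bursts repeating at 1.25 Hz.
fs = 500
t = np.arange(0, 8, 1 / fs)
beat_rate = 1.25
envelope = (np.sin(2 * np.pi * beat_rate * t) > 0.95).astype(float)
heart = envelope * np.sin(2 * np.pi * 50 * t)

period = estimate_period(heart, fs)
print(round(1 / period, 2), "Hz")  # close to 1.25
```

Skipping past the first dip prevents the zero-lag peak (and the intra-burst carrier peaks just after it) from masking the beat-rate peak.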

  16. Perceived loudness of spatially distributed sound sources

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2005-01-01

…psychoacoustic attributes into account. Therefore, a method for deriving loudness maps was developed in an earlier study [Song, Internoise2004, paper 271]. The present experiment investigates to what extent perceived loudness depends on the distribution of individual sound sources. Three loudspeakers were positioned 1.5 m from the centre of the listener’s head, one straight ahead and two 10 degrees to the right and left, respectively. Six participants matched the loudness of either one or two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz centre frequencies) to a 2-kHz, 60-dB SPL narrow-band noise placed in the frontal loudspeaker. The two sounds originated either from the central speaker or from the two offset loudspeakers. It turned out that the subjects perceived the noises to be softer when they were distributed in space. In addition, loudness was calculated from the recordings…

  17. Dog-appeasing pheromone collars reduce sound-induced fear and anxiety in beagle dogs: a placebo-controlled study.

    Science.gov (United States)

    Landsberg, G M; Beck, A; Lopez, A; Deniaud, M; Araujo, J A; Milgram, N W

    2015-09-12

    The objective of the study was to assess the effects of a dog-appeasing pheromone (DAP) collar in reducing sound-induced fear and anxiety in a laboratory model of thunderstorm simulation. Twenty-four beagle dogs naïve to the current test were divided into two treatment groups (DAP and placebo) balanced on their fear score in response to a thunderstorm recording. Each group was then exposed to two additional thunderstorm simulation tests on consecutive days. Dogs were video-assessed by a trained observer on a 6-point scale for active, passive and global fear and anxiety (combined). Both global and active fear and anxiety scores were significantly improved during and following thunder compared with placebo on both test days. DAP significantly decreased global fear and anxiety across 'during' and 'post' thunder times when compared with baseline. There was no significant improvement in the placebo group from baseline on the test days. In addition, the DAP group showed significantly greater use of the hide box at any time with increased exposure compared with the placebo group. The DAP collar reduced the scores of fear and anxiety, and increased hide use in response to a thunder recording, possibly by counteracting noise-related increased reactivity. British Veterinary Association.

  18. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    Science.gov (United States)

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.
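The core CRQA idea, counting how often one partner's movement trajectory revisits states of the other's, can be reduced to a toy sketch. The series below are simulated, and the sketch omits the embedding and additional measures (determinism, laminarity, anisotropy) used in full CRQA:

```python
import numpy as np

def cross_recurrence_rate(x, y, radius):
    """Fraction of (i, j) pairs with |x[i] - y[j]| < radius.

    A bare-bones cross-recurrence measure (%REC) for two 1-D series.
    """
    d = np.abs(x[:, None] - y[None, :])
    return (d < radius).mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
infant_head = np.sin(t) + 0.1 * rng.standard_normal(500)
mother_hand = np.sin(t - 0.3) + 0.1 * rng.standard_normal(500)  # coupled, slight lag
unrelated = rng.standard_normal(500)                            # no coupling

coupled = cross_recurrence_rate(infant_head, mother_hand, radius=0.2)
uncoupled = cross_recurrence_rate(infant_head, unrelated, radius=0.2)
print(coupled > uncoupled)  # coupled movement yields a denser recurrence plot
```

In practice the movement series would come from the TLD tracker's per-frame positions rather than from simulated sinusoids.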

  19. An integrable, web-based solution for easy assessment of video-recorded performances

    DEFF Research Database (Denmark)

    Subhi, Yousif; Todsen, Tobias; Konge, Lars

    2014-01-01

…and access to this information should be restricted to select personnel. A local software solution may also ease the need for customization to local needs and integration into existing user databases or project management software. We developed an integrable web-based solution for easy assessment of video…

  20. Video material and epilepsy.

    Science.gov (United States)

    Harding, G F; Jeavons, P M; Edson, A S

    1994-01-01

Nine patients who had epileptic attacks while playing computer games were studied in the laboratory. Patients had an EEG recorded as well as their response to intermittent photic stimulation (IPS) at flash rates of 1-60 fps. In addition, pattern sensitivity was assessed in all patients by a gratings pattern. Only 2 patients had no previous history of convulsions, and only 2 had a normal basic EEG. All but 1 were sensitive to IPS, and all but 1 were pattern sensitive. Most patients were male; although this appears to conflict with previously published results regarding the sex ratio in photosensitivity, it reflects the male predominance of video game usage. We compared our results with those reported in the literature. Diagnosing video game epilepsy requires performing an EEG with IPS and pattern stimulation. We propose a standard method of testing.

  1. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753
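The frame-tagging step the abstract describes, pairing each video frame with the sensor reading current at its timestamp, can be sketched as follows. The field names and sample values are illustrative, not the authors' protocol:

```python
import bisect

def tag_frames(frame_times, sensor_samples):
    """Attach to each video frame the most recent sensor reading.

    frame_times: frame timestamps in seconds.
    sensor_samples: list of (timestamp, reading) tuples, sorted by time.
    Returns one {"t": ..., "sensor": ...} tag per frame.
    """
    times = [t for t, _ in sensor_samples]
    tags = []
    for ft in frame_times:
        i = bisect.bisect_right(times, ft) - 1  # latest sample at or before frame
        reading = sensor_samples[max(i, 0)][1]
        tags.append({"t": ft, "sensor": reading})
    return tags

# 25 fps video, temperature sampled once per second (hypothetical values).
frames = [i / 25 for i in range(75)]  # 3 s of video
temps = [(0.0, 24.1), (1.0, 24.3), (2.0, 24.2)]
tags = tag_frames(frames, temps)
print(tags[0]["sensor"], tags[30]["sensor"])  # 24.1 24.3
```

A real implementation could serialize such tags alongside the uploaded video so the server's semantic search can index them.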

  2. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  4. Human and animal sounds influence recognition of body language.

    Science.gov (United States)

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information obtains for whole body video images with the facial expression blanked and includes human as well as animal sounds.

  5. Are YouTube videos accurate and reliable on basic life support and cardiopulmonary resuscitation?

    Science.gov (United States)

    Yaylaci, Serpil; Serinken, Mustafa; Eken, Cenker; Karcioglu, Ozgur; Yilmaz, Atakan; Elicabuk, Hayri; Dal, Onur

    2014-10-01

The objective of this study is to investigate the reliability and accuracy of the information in YouTube videos related to CPR and BLS against the 2010 CPR guidelines. YouTube was queried using four search terms 'CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support' between 2011 and 2013. The sources that uploaded the videos, the record time, the number of viewers in the study period, and the inclusion of humans or manikins were recorded. The videos were rated on whether they displayed the correct order of resuscitative efforts in full accord with the 2010 CPR guidelines. Two hundred and nine videos meeting the inclusion criteria after the search in YouTube with the four search terms comprised the study sample. The median score of the videos is 5 (IQR: 3.5-6). Only 11.5% (n = 24) of the videos were found to be compatible with the 2010 CPR guidelines with regard to sequence of interventions. Videos uploaded by 'Guideline bodies' had significantly higher rates of download when compared with the videos uploaded by other sources. Sources of the videos and date of upload (year) were not shown to have any significant effect on the scores received (P = 0.615 and 0.513, respectively). The videos' number of downloads did not differ according to whether the videos were compatible with the guidelines (P = 0.832). The videos downloaded more than 10,000 times had a higher score than the others (P = 0.001). The majority of YouTube video clips purporting to be about CPR are not relevant educational material. Of those that are focused on teaching CPR, only a small minority optimally meet the 2010 Resuscitation Guidelines. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  6. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small number of patients. Here we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N = 284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, a significantly larger labeled set than in previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
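A toy spectral-flatness heuristic illustrates the kind of acoustic difference between tonal wheezes and broadband breath noise that such classifiers exploit. This is a sketch on synthetic signals, not the paper's deep-learning model:

```python
import numpy as np

def spectral_flatness(x):
    """Geometric / arithmetic mean of the power spectrum.

    Near 1 for noise-like sounds, near 0 for tonal sounds such as
    wheezes; a classic heuristic feature, not a classifier by itself.
    """
    psd = np.abs(np.fft.rfft(x)) ** 2 + 1e-12  # floor avoids log(0)
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

fs = 4000
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
wheeze_like = np.sin(2 * np.pi * 400 * t)  # sustained tonal sound
noise_like = rng.standard_normal(len(t))   # broadband breath-like noise

print(spectral_flatness(wheeze_like) < spectral_flatness(noise_like))  # True
```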

  7. Computerized Respiratory Sounds: Novel Outcomes for Pulmonary Rehabilitation in COPD.

    Science.gov (United States)

    Jácome, Cristina; Marques, Alda

    2017-02-01

Computerized respiratory sounds are a simple and noninvasive measure to assess lung function. Nevertheless, their potential to detect changes after pulmonary rehabilitation (PR) is unknown and needs clarification if respiratory acoustics are to be used in clinical practice. Thus, this study investigated the short- and mid-term effects of PR on computerized respiratory sounds in subjects with COPD. Forty-one subjects with COPD completed a 12-week PR program and a 3-month follow-up. Secondary outcome measures included dyspnea, self-reported sputum, FEV1, exercise tolerance, self-reported physical activity, health-related quality of life, and peripheral muscle strength. Computerized respiratory sounds, the primary outcomes, were recorded at right/left posterior chest using 2 stethoscopes. Air flow was recorded with a pneumotachograph. Normal respiratory sounds, crackles, and wheezes were analyzed with validated algorithms. There was a significant effect over time in all secondary outcomes, with the exception of FEV1 and of the impact domain of the St George Respiratory Questionnaire. Inspiratory and expiratory median frequencies of normal respiratory sounds in the 100-300 Hz band were significantly lower immediately (-2.3 Hz [95% CI -4 to -0.7] and -1.9 Hz [95% CI -3.3 to -0.5]) and at 3 months (-2.1 Hz [95% CI -3.6 to -0.7] and -2 Hz [95% CI -3.6 to -0.5]) post-PR. The mean number of expiratory crackles (-0.8, 95% CI -1.3 to -0.3) and inspiratory wheeze occupation rate (median 5.9 vs 0) were significantly lower immediately post-PR. Computerized respiratory sounds were sensitive to short- and mid-term effects of PR in subjects with COPD. These findings are encouraging for the clinical use of respiratory acoustics. Future research is needed to strengthen these findings and explore the potential of computerized respiratory sounds to assess the effectiveness of other clinical interventions in COPD. Copyright © 2017 by Daedalus Enterprises.

  8. Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system

    Directory of Open Access Journals (Sweden)

    Michael B. McCamy

    2013-02-01

Full Text Available Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called “fixational eye movements”, which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT’s small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.

  9. Data Management Rubric for Video Data in Organismal Biology.

    Science.gov (United States)

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata, such as the true frame rate, spatial calibration, and the genus and species, weight, and age of the organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata…
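A minimal completeness check against the nine standards might look like the sketch below. The field names are my own condensation of the rubric's areas, not its official schema:

```python
# The nine rubric standards, condensed into field names (illustrative only).
REQUIRED_FIELDS = [
    "storage_location", "file_format", "metadata_link", "access_policy",
    "contact", "camera_settings", "organism", "recording_conditions", "topic",
]

def missing_metadata(record):
    """Return which of the nine rubric areas a video record leaves blank."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {
    "file_format": "MP4 (H.264)",
    "camera_settings": {"frame_rate_hz": 500, "shutter_us": 500},
    "organism": "Gorilla gorilla gorilla",
    "topic": "locomotion",
}
print(sorted(missing_metadata(record)))  # the five areas still to fill in
```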

  10. Concerns of Quality and Safety in Public Domain Surgical Education Videos: An Assessment of the Critical View of Safety in Frequently Used Laparoscopic Cholecystectomy Videos.

    Science.gov (United States)

    Deal, Shanley B; Alseidi, Adnan A

    2017-12-01

Online videos are among the most common resources for case preparation. Using crowd sourcing, we evaluated the relationship between operative quality and viewing characteristics of online laparoscopic cholecystectomy videos. We edited 160 online videos of laparoscopic cholecystectomy to 60 seconds or less. Crowd workers (CW) rated videos using Global Objective Assessment of Laparoscopic Skills (GOALS) and the critical view of safety (CVS) criteria, and assigned overall pass/fail ratings if CVS was achieved; linear mixed effects models derived average ratings. Views, likes, dislikes, subscribers, and country were recorded for subset analysis of YouTube videos. Spearman correlation coefficient (SCC) assessed correlation between performance measures. One video (0.6%) achieved a passing CVS score of ≥5; 23%, ≥4; 44%, ≥3; 79%, ≥2; and 100% ≥1. Pass/fail ratings correlated with CVS (SCC 0.95), but viewing characteristics did not correlate with operative quality. The average CVS and GOALS scores were no different for videos with >20,000 views (22%) compared with those with <20,000 views. There are concerns of quality and safety in online surgical videos of LC. Favorable characteristics, such as number of views or likes, do not translate to higher quality. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum- and difference-frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite-amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  12. Digital Communication and Records in Service Provision and Supervision: Regulation and Practice.

    Science.gov (United States)

    Cavalari, Rachel N S; Gillis, Jennifer M; Kruser, Nathan; Romanczyk, Raymond G

    2015-10-01

    While the use of computer-based communication, video recordings, and other "electronic" records is commonplace in clinical service settings and research, management of digital records can become a great burden from both practical and regulatory perspectives. Three types of challenges commonly present themselves: regulatory requirements; storage, transmission, and access; and analysis for clinical and research decision-making. Unfortunately, few practitioners and organizations are well enough informed to set necessary policies and procedures in an effective, comprehensive manner. The three challenges are addressed using a demonstrative example of policies and procedural guidelines from an applied perspective, maintaining the unique emphasis behavior analysts place upon quantitative analysis. Specifically, we provide a brief review of federal requirements relevant to the use of video and electronic records in the USA; non-jargon pragmatic solutions to managing and storing video and electronic records; and last, specific methodologies to facilitate extraction of quantitative information in a cost-effective manner.

  13. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    Science.gov (United States)

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
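The time-of-flight analysis described in the abstract reduces to fitting distance against delay, with the slope giving the speed of sound. A numerical sketch with hypothetical measurements (not the paper's data):

```python
import numpy as np

# Hypothetical time-of-flight measurements: distance (m) vs delay (s).
# The slope of the least-squares line is the speed of sound.
distances = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
delays = distances / 343.0 + 1e-5 * np.array([1, -1, 0.5, -0.5, 0])  # small noise

speed, intercept = np.polyfit(delays, distances, 1)
print(round(speed), "m/s")  # ≈ 343 m/s
```

Fitting a line rather than dividing a single distance by a single delay averages out timing jitter and absorbs any constant latency of the sound card into the intercept.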

  14. Artificial neural networks for breathing and snoring episode detection in sleep sounds

    International Nuclear Information System (INIS)

    Emoto, Takahiro; Akutagawa, Masatake; Kinouchi, Yohsuke; Abeyratne, Udantha R; Chen, Yongjian; Kawata, Ikuji

    2012-01-01

    Obstructive sleep apnea (OSA) is a serious disorder characterized by intermittent events of upper airway collapse during sleep. Snoring is the most common nocturnal symptom of OSA. Almost all OSA patients snore, but not all snorers have the disease. Recently, researchers have attempted to develop automated snore analysis technology for the purpose of OSA diagnosis. These technologies commonly require, as the first step, the automated identification of snore/breathing episodes (SBE) in sleep sound recordings. Snore intensity may occupy a wide dynamic range (>95 dB) spanning from the barely audible to loud sounds. Low-intensity SBE sounds are sometimes seen buried within the background noise floor, even in high-fidelity sound recordings made within a sleep laboratory. The complexity of SBE sounds makes it a challenging task to develop automated snore segmentation algorithms, especially in the presence of background noise. In this paper, we propose a fundamentally novel approach based on artificial neural network (ANN) technology to detect SBEs. Working on clinical data, we show that the proposed method can detect SBE at a sensitivity and specificity exceeding 0.892 and 0.874 respectively, even when the signal is completely buried in background noise (SNR <0 dB). We compare the performance of the proposed technology with those of the existing methods (short-term energy, zero-crossing rates) and illustrate that the proposed method vastly outperforms conventional techniques. (paper)
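The short-term-energy baseline against which the ANN is compared can be sketched on synthetic data. The snore burst and noise floor below are simulated, not clinical recordings:

```python
import numpy as np

def short_term_energy(x, win):
    """Frame-wise mean energy: the classical baseline for snore segmentation."""
    frames = x[: len(x) // win * win].reshape(-1, win)
    return (frames ** 2).mean(axis=1)

fs = 8000
rng = np.random.default_rng(2)
signal = 0.05 * rng.standard_normal(fs * 3)  # 3 s of background noise
signal[fs:2 * fs] += np.sin(2 * np.pi * 150 * np.arange(fs) / fs)  # 1 s "snore"

energy = short_term_energy(signal, win=400)  # 50 ms frames
threshold = 3 * np.median(energy)            # simple adaptive threshold
detected = energy > threshold
print(detected[:20].any(), detected[20:40].all(), detected[40:].any())
# False True False — only the snore second is flagged
```

At the low SNRs the paper targets, this energy threshold fails (the snore no longer stands out from the noise floor), which is the motivation for the ANN approach.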

  15. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is destined for environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the…

  16. Evoked responses to sinusoidally modulated sound in unanaesthetized dogs

    NARCIS (Netherlands)

    Tielen, A.M.; Kamp, A.; Lopes da Silva, F.H.; Reneau, J.P.; Storm van Leeuwen, W.

    1. Responses evoked by sinusoidally amplitude-modulated sound in unanaesthetized dogs have been recorded from the inferior colliculus and from auditory cortex structures by means of chronically indwelling stainless steel wire electrodes. 2. Harmonic analysis of the average responses demonstrated

  17. VNIIEF NMPC and A Maintenance Management Conference video surveillance

    International Nuclear Information System (INIS)

    Malone, T.

    1997-08-01

    This paper is part of ongoing Nuclear Materials Protection, Control and Accountability (NMPC and A) work with the All Russian Scientific Research Institute of Experimental Physics (VNIIEF), Sarov, Russia. The material presented in this paper provides guidance for the preparation of maintenance management for the NMPC and A video assessment and surveillance subsystems being installed at VNIIEF. This paper discusses maintenance philosophies, performance testing, equipment inspection/setup, and record keeping for a video assessment and surveillance subsystem

  18. Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle

    Science.gov (United States)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-11-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
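
The speed-from-spectrogram calculation this record describes can be sketched in a few lines. For a source passing a stationary observer, the approach and recede frequencies alone determine the speed, with the emitted frequency cancelling out; the frequencies and speed of sound below are illustrative values, not data from the paper.

```python
# Estimate the speed of a passing car from the Doppler-shifted frequencies
# heard while it approaches (f_a) and recedes (f_r). For a source moving at
# speed v past a stationary observer, f_a = f0*c/(c - v) and f_r = f0*c/(c + v),
# which combine to v = c*(f_a - f_r)/(f_a + f_r), eliminating the unknown f0.

def car_speed(f_approach, f_recede, c=343.0):
    """Speed of the source (m/s) from approach/recede frequencies (Hz)."""
    return c * (f_approach - f_recede) / (f_approach + f_recede)

speed = car_speed(510.0, 490.0)  # e.g. a tone read off the spectrogram
print(round(speed, 2))           # 6.86 m/s, about 25 km/h
```

In a classroom the two frequencies can be read off the flat portions of the spectrogram before and after the pass-by, which sidesteps fitting the full frequency-versus-time curve.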

  19. Implication of Copyright Provisions for Literary Works in Films and ...

    African Journals Online (AJOL)

    The emphasis of copyright is on original literary works, films, sound recordings and others. The focus of this paper is to discuss the various provisions of the copyright law as they affect films, video and by extension video CD. The study examines the various interpretations of the provisions of the copyright law as they affect ...

  20. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    Directory of Open Access Journals (Sweden)

    Mari eTervaniemi

    2014-07-01

    Full Text Available Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related MMN and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  1. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.

    Science.gov (United States)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which holds a key position in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  2. An Ethnografic Approach to Video Analysis

    DEFF Research Database (Denmark)

    Holck, Ulla

    2007-01-01

    The overall purpose of the ethnographic approach to video analysis is to become aware of implicit knowledge in those being observed, that is, knowledge that cannot be acquired through interviews. In music therapy this approach can be used to analyse patterns of interaction between client and therapist. After a short introduction to the ethnographic approach, the workshop participants will have a chance to try out the method, first through a common exercise and then applied to video recordings of music therapy with children with severe communicative limitations. Focus will be on patterns of interaction...

  3. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Rojas Raul

    2007-01-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  4. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  5. The effect of sound sources on soundscape appraisal

    NARCIS (Netherlands)

    van den Bosch, Kirsten; Andringa, Tjeerd

    2014-01-01

    In this paper we explore how the perception of sound sources (like traffic, birds, and the presence of distant people) influences the appraisal of soundscapes (as calm, lively, chaotic, or boring). We have used 60 one-minute recordings, selected from 21 days (502 hours) in March and July 2010.

  6. Video-based measurements for wireless capsule endoscope tracking

    International Nuclear Information System (INIS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions. (paper)
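
The frame-registration step this abstract outlines (RANSAC over matched interest points, then estimation of displacement and rotation) can be illustrated on synthetic data. The sketch below substitutes randomly generated point matches for the SURF features of the actual pipeline; all names, sample counts, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rotation + translation mapping src -> dst (2D Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # keep a proper rotation, no reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=1.0, rng=np.random.default_rng(0)):
    """RANSAC estimate of the rigid motion between matched point sets."""
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)  # minimal sample
        R, t = rigid_from_pairs(src[idx], dst[idx])
        resid = np.linalg.norm(dst - (src @ R.T + t), axis=1)
        n = int((resid < tol).sum())
        if n > best_inliers:
            best, best_inliers = resid < tol, n
    return rigid_from_pairs(src[best], dst[best])          # refit on inliers

# Synthetic frame-to-frame motion: a 5 degree rotation plus a small shift,
# with one gross outlier standing in for a bad feature match.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(1).uniform(0, 100, size=(30, 2))
dst = src @ R_true.T + np.array([3.0, -2.0])
dst[0] += 50.0                                             # the outlier
R, t = ransac_rigid(src, dst)
print(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])))            # ~5 degrees
```

The outlier rejection is the point of the RANSAC step: a direct least-squares fit over all matches would be dragged off by the single bad correspondence, while the consensus fit recovers the true motion from the inliers alone.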

  7. Video-based measurements for wireless capsule endoscope tracking

    Science.gov (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.

  8. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.
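
The ERP measures this abstract relies on (averaging over trials, then taking the mean amplitude in a component window such as the P200) can be sketched on synthetic data. The sampling rate, window bounds, and amplitudes below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Trial-averaged ERP and mean amplitude in a P200-like window (150-250 ms
# after picture onset). The data are synthetic: a positive deflection near
# 200 ms buried in noise, for 64 single trials of one channel at 500 Hz.

fs = 500                                            # sampling rate, Hz
t = np.arange(-0.1, 0.8, 1 / fs)                    # epoch time axis, s
signal = 5.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))   # uV
rng = np.random.default_rng(0)
trials = signal + rng.normal(0.0, 10.0, size=(64, t.size))   # noisy trials

erp = trials.mean(axis=0)               # averaging suppresses trial noise
window = (t >= 0.15) & (t <= 0.25)      # P200 measurement window
p200_mean = erp[window].mean()          # conventional mean-amplitude measure
print(f"P200 mean amplitude: {p200_mean:.1f} uV")
```

Averaging over N trials shrinks the noise standard deviation by a factor of sqrt(N), which is why the 5 uV deflection is recoverable here despite 10 uV single-trial noise.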

  9. A qualitative analysis of methotrexate self-injection education videos on YouTube.

    Science.gov (United States)

    Rittberg, Rebekah; Dissanayake, Tharindri; Katz, Steven J

    2016-05-01

    The aim of this study is to identify and evaluate the quality of videos for patients available on YouTube for learning to self-administer subcutaneous methotrexate. Using the search term "Methotrexate injection," two clinical reviewers analyzed the first 60 videos on YouTube. Source and search rank of video, audience interaction, video duration, and time since video was uploaded on YouTube were recorded. Videos were classified as useful, misleading, or a personal patient view. Videos were rated for reliability, comprehensiveness, and global quality scale (GQS). Reasons for misleading videos were documented, and patient videos were documented as being either positive or negative towards methotrexate (MTX) injection. Fifty-one English videos overlapped between the two geographic locations; 10 videos were classified as useful (19.6 %), 14 misleading (27.5 %), and 27 personal patient view (52.9 %). Total views of videos were 161,028: 19.2 % useful, 72.8 % patient, and 8.0 % misleading. Mean GQS: 4.2 (±1.0) useful, 1.6 (±1.1) misleading, and 2.0 (±0.9) for patient videos (p < …). With such a tool available, clinicians need to be familiar with specific resources to help guide and educate their patients to ensure best outcomes.

  10. A comparison of Google Glass and traditional video vantage points for bedside procedural skill assessment.

    Science.gov (United States)

    Evans, Heather L; O'Shea, Dylan J; Morris, Amy E; Keys, Kari A; Wright, Andrew S; Schaad, Douglas C; Ilgen, Jonathan S

    2016-02-01

    This pilot study assessed the feasibility of using first person (1P) video recording with Google Glass (GG) to assess procedural skills, as compared with traditional third person (3P) video. We hypothesized that raters reviewing 1P videos would visualize more procedural steps with greater inter-rater reliability than 3P rating vantages. Seven subjects performed simulated internal jugular catheter insertions. Procedures were recorded by both Google Glass and an observer's head-mounted camera. Videos were assessed by 3 expert raters using a task-specific checklist (CL) and both an additive- and summative-global rating scale (GRS). Mean scores were compared by t-tests. Inter-rater reliabilities were calculated using intraclass correlation coefficients. The 1P vantage was associated with a significantly higher mean CL score than the 3P vantage (7.9 vs 6.9, P = .02). Mean GRS scores were not significantly different. Mean inter-rater reliabilities for the CL, additive-GRS, and summative-GRS were similar between vantages. 1P vantage recordings may improve visualization of tasks for behaviorally anchored instruments (eg, CLs), while maintaining similar global ratings and inter-rater reliability when compared with conventional 3P vantage recordings. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets, building blocks and surrounded by gates and water. The video recordings are

  12. Behavioral response of manatees to variations in environmental sound levels

    Science.gov (United States)

    Miksis-Olds, Jennifer L.; Wagner, Tyler

    2011-01-01

    Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.
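
The abstract does not detail how the sound levels were computed, but a common starting point is the broadband RMS sound pressure level of each recording segment, referenced to 1 µPa as is conventional underwater. A minimal sketch on a synthetic calibrated-pressure signal (the function name and values are illustrative):

```python
import numpy as np

# Broadband sound pressure level of a recording segment in dB re 1 uPa,
# the conventional underwater reference. The "recording" is a synthetic
# 1 kHz tone whose RMS pressure is 1 Pa = 1e6 uPa, so the level is 120 dB.

def spl_db(pressure_pa, p_ref_pa=1e-6):
    """RMS sound pressure level in dB relative to p_ref_pa."""
    rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(rms / p_ref_pa)

fs = 48_000                                         # sample rate, Hz
t = np.arange(fs) / fs                              # one second of signal
tone = np.sqrt(2.0) * np.sin(2 * np.pi * 1000 * t)  # RMS pressure = 1 Pa
print(round(spl_db(tone), 1))                       # 120.0
```

In practice the raw samples must first be converted to pascals using the hydrophone sensitivity and recorder gain before a level like this is meaningful.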

  13. Digital video and audio broadcasting technology a practical engineering guide

    CERN Document Server

    Fischer, Walter

    2010-01-01

    'Digital Video and Audio Broadcasting Technology - A Practical Engineering Guide' deals with all the most important digital television, sound radio and multimedia standards such as MPEG, DVB, DVD, DAB, ATSC, T-DMB, DMB-T, DRM and ISDB-T. The book provides an in-depth look at these subjects in terms of practical experience. In addition it contains chapters on the basics of technologies such as analog television, digital modulation, COFDM or mathematical transformations between time and frequency domains. The attention in the respective field under discussion is focussed on aspects of measuring t

  14. The role of pars flaccida in human middle ear sound transmission.

    Science.gov (United States)

    Aritomo, H; Goode, R L; Gonzalez, J

    1988-04-01

    The role of the pars flaccida in middle ear sound transmission was studied with the use of twelve otoscopically normal, fresh, human temporal bones. Peak-to-peak umbo displacement in response to a constant sound pressure level at the tympanic membrane was measured with a noncontacting video measuring system capable of repeatable measurements down to 0.2 micron. Measurements were made before and after pars flaccida modifications at 18 frequencies between 100 and 4000 Hz. Four pars flaccida modifications were studied: (1) acoustic insulation of the pars flaccida to the ear canal with a silicone rubber baffle, (2) stiffening the pars flaccida with cyanoacrylate cement, (3) decreasing the tension of the pars flaccida with a nonperforating incision, and (4) perforation of the pars flaccida. All of the modifications (except the perforation) had a minimal effect on umbo displacement; this seems to imply that the pars flaccida has a minor acoustic role in human beings.

  15. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of applied tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  16. Graphic recording of heart sounds in height native subjects

    OpenAIRE

    Rotta, Andrés; Ascenzo C., Jorge

    2014-01-01

    The series of phonocardiograms obtained from normal subjects shows that it is not always possible to record the atrial and third heart sounds, with different authors reporting diverse recording rates. Why the graphic registration of these sounds fails in largely normal individuals has not yet been explained in concrete terms, although various influencing factors are recognized, such as age, the determinants of the sounds, the transmission conditions of the chest wall, the sensitivity of the recording apparatus, etc.

  17. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
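
Mean average precision, the evaluation metric cited above, is worth unpacking: average precision for one query is the mean of the precision values at each rank where a relevant item appears, and MAP averages that over queries. A minimal sketch with binary relevance (toy data, not the MediaEval set):

```python
def average_precision(ranked_relevance):
    """AP for one query; ranked_relevance holds 1/0 per item in rank order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)   # precision at each relevant hit
    return sum(precisions) / max(hits, 1)

def mean_average_precision(runs):
    """MAP: the mean of per-query average precisions."""
    return sum(average_precision(r) for r in runs) / len(runs)

# Two toy "queries": relevant shots at ranks 1 and 3, and at rank 2.
runs = [[1, 0, 1, 0], [0, 1, 0, 0]]
print(mean_average_precision(runs))          # (5/6 + 1/2) / 2 = 2/3
```

Because precision is sampled only at relevant ranks, MAP rewards systems that push relevant shots toward the top of the ranking rather than merely retrieving them somewhere in the list.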

  18. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  19. Use of Blackboard Collaborate for Creation of a Video Course Library

    Science.gov (United States)

    Mitzova-Vladinov, Greta; Bizzio-Knott, Rossana; Hooshmand, Mary; Hauglum, Shayne; Aziza, Khitam

    2017-01-01

    This case study examines an innovative way the Blackboard Collaborate video conferencing learning platform was used to record graduate student presentations for creating a course library utilized in individualized student teaching. The presentation recordings evolved into an innovative strategy for providing feedback and ultimately improvement in…

  20. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.