WorldWideScience

Sample records for high quality audio

  1. Perceived Audio Quality Analysis in Digital Audio Broadcasting Plus System Based on PEAQ

    Directory of Open Access Journals (Sweden)

    K. Ulovec

    2018-04-01

    Broadcasters need to decide on the bitrates of the services in the multiplex transmitted via the Digital Audio Broadcasting Plus system. The bitrate should be set as low as possible to fit the maximal number of services, but the quality must remain high, not lower than in conventional analog systems. In this paper, the objective method Perceptual Evaluation of Audio Quality is used to analyze the perceived audio quality for the relevant codecs: MP2 and AAC, the latter offering three profiles. The main aim is to determine dependencies on the type of signal (music or speech), the number of channels (stereo or mono), and the bitrate. Results indicate that only the MP2 codec and the AAC Low Complexity profile reach imperceptible quality loss. The MP2 codec needs a higher bitrate than the AAC Low Complexity profile for the same quality. For both versions of the AAC High-Efficiency profile, limit bitrates are determined above which the less complex profiles outperform the more complex ones, so bitrates above these limits are not worth using. It is shown that stereo music generally has worse quality than stereo speech, whereas for mono the dependencies vary with the codec/profile. Furthermore, the numbers of services satisfying various quality criteria are presented.
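
    A minimal sketch of this kind of bitrate sweep is shown below. It assumes ffmpeg is available for MP2/AAC encoding and decoding, and that a PEAQ implementation is installed as a peaq command-line tool (for example GstPEAQ) whose output contains an Objective Difference Grade; the tool name, its output format, and the file names are assumptions, not details from the paper.

    import os
    import re
    import subprocess
    import tempfile

    def encode_decode(ref_wav: str, codec: str, bitrate_kbps: int, out_wav: str) -> None:
        """Encode the reference with ffmpeg, then decode back to WAV for comparison."""
        coded = os.path.join(tempfile.gettempdir(), f"coded_{codec}_{bitrate_kbps}.mka")
        subprocess.run(["ffmpeg", "-y", "-i", ref_wav, "-c:a", codec,
                        "-b:a", f"{bitrate_kbps}k", coded], check=True)
        subprocess.run(["ffmpeg", "-y", "-i", coded, out_wav], check=True)
        os.remove(coded)

    def peaq_odg(ref_wav: str, test_wav: str) -> float:
        """Run the (assumed) peaq tool and parse the Objective Difference Grade it prints."""
        out = subprocess.run(["peaq", ref_wav, test_wav],
                             capture_output=True, text=True, check=True).stdout
        return float(re.search(r"Objective Difference Grade:\s*(-?\d+\.\d+)", out).group(1))

    if __name__ == "__main__":
        for codec in ("mp2", "aac"):                  # MP2 vs. AAC-LC (ffmpeg encoder names)
            for kbps in (64, 96, 128, 160, 192):
                encode_decode("reference.wav", codec, kbps, "decoded.wav")
                print(codec, kbps, peaq_odg("reference.wav", "decoded.wav"))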

  2. Parametric Packet-Layer Model for Evaluating Audio Quality in Multimedia Streaming Services

    Science.gov (United States)

    Egi, Noritsugu; Hayashi, Takanori; Takahashi, Akira

    We propose a parametric packet-layer model for monitoring audio quality in multimedia streaming services such as Internet protocol television (IPTV). This model estimates audio quality of experience (QoE) on the basis of quality degradation due to coding and packet loss of an audio sequence. The input parameters of this model are audio bit rate, sampling rate, frame length, packet-loss frequency, and average burst length. Audio bit rate, packet-loss frequency, and average burst length are calculated from header information in received IP packets. For sampling rate, frame length, and audio codec type, the values or the names used in monitored services are input into this model directly. We performed a subjective listening test to examine the relationships between these input parameters and perceived audio quality. The codec used in this test was the Advanced Audio Codec-Low Complexity (AAC-LC), which is one of the international standards for audio coding. On the basis of the test results, we developed an audio quality evaluation model. The verification results indicate that audio quality estimated by the proposed model has a high correlation with perceived audio quality.
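
    For illustration only, a sketch of the general shape of such a parametric estimator is given below: a coding-quality term driven by the audio bitrate and a degradation term driven by packet-loss frequency and average burst length. The functional form and coefficients are placeholders invented here, not the published model; in practice they would be fitted to subjective listening-test scores, as the paper describes.

    import math

    def estimate_audio_mos(bitrate_kbps: float,
                           loss_frequency: float,     # packet losses per second
                           avg_burst_length: float,   # packets per loss burst
                           a: float = 1.0, b: float = 30.0,
                           c: float = 2.0, d: float = 0.3) -> float:
        """Return a MOS-like audio quality estimate clipped to [1, 5] (placeholder model)."""
        # Coding quality grows with bitrate and saturates (placeholder curve).
        q_coding = 1.0 + 4.0 * a * (1.0 - math.exp(-bitrate_kbps / b))
        # Packet loss degrades quality, more so for longer bursts (placeholder curve).
        degradation = c * loss_frequency * (1.0 + d * (avg_burst_length - 1.0))
        return max(1.0, min(5.0, q_coding - degradation))

    print(estimate_audio_mos(96, loss_frequency=0.5, avg_burst_length=2))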

  3. Predicting the Overall Spatial Quality of Automotive Audio Systems

    Science.gov (United States)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised by their non-ideal listening environments, and the systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with reliability similar to formal listening tests but in less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted, and the results were compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, metrics proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that are interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial
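
    One of the metric families named above, the interaural cross-correlation coefficient (IACC), has a standard textbook form: the maximum of the normalized cross-correlation between the left- and right-ear signals over lags of roughly ±1 ms. The sketch below implements that generic formulation in Python; it is not the QESTRAL implementation, and the test signal and delay are arbitrary.

    import numpy as np

    def iacc(left: np.ndarray, right: np.ndarray, fs: int, max_lag_ms: float = 1.0) -> float:
        """Max of the normalized interaural cross-correlation over lags of +/- max_lag_ms."""
        max_lag = int(fs * max_lag_ms / 1000.0)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        if norm == 0.0:
            return 0.0
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                c = np.sum(left[lag:] * right[:len(right) - lag])
            else:
                c = np.sum(left[:lag] * right[-lag:])
            best = max(best, abs(c) / norm)
        return best

    fs = 48000
    t = np.arange(fs) / fs
    left = np.sin(2 * np.pi * 500 * t)
    right = np.roll(left, 24)                 # roughly 0.5 ms interaural delay
    print(round(iacc(left, right, fs), 3))    # close to 1.0 for a delayed copy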

  4. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    Science.gov (United States)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, important advances and the widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged fast development of voice and video services like video-calling. While multimedia services have largely grown on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance mobile standard called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). Obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec at 12.2 kbps.

  5. Ferrite bead effect on Class-D amplifier audio quality

    OpenAIRE

    Haddad, Kevin El; Mrad, Roberto; Morel, Florent; Pillonnet, Gael; Vollaire, Christian; Nagari, Angelo

    2014-01-01

    This paper studies the effect of ferrite beads on the audio quality of Class-D audio amplifiers. The latter is a switching circuit which creates high-frequency harmonics. Generally, a filter is used at the amplifier output for the sake of electromagnetic compatibility (EMC). Often, in integrated solutions, this filter contains ferrite beads, which are magnetic components and present nonlinear behavior. Time domain measurements and their equivalence in frequency do...

  6. Subband coding of digital audio signals without loss of quality

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Breeuwer, Marcel; van de Waal, Robbert

    1989-01-01

    A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been
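
    As a rough illustration of the masking-driven adaptive bit allocation idea (not the authors' scheme), the sketch below greedily assigns bits to the subbands whose signal level exceeds an assumed masking threshold by the largest margin, with each bit taken to buy about 6 dB of quantization SNR. The subband levels, thresholds, and bit budget are made-up inputs.

    import numpy as np

    def allocate_bits(subband_db: np.ndarray, mask_db: np.ndarray,
                      total_bits: int, max_bits_per_band: int = 16) -> np.ndarray:
        """Greedy bit allocation: feed the band with the worst noise-to-mask ratio first."""
        bits = np.zeros(len(subband_db), dtype=int)
        nmr = subband_db - mask_db          # noise-to-mask ratio before any bits are spent
        allocated = 0
        while allocated < total_bits:
            k = int(np.argmax(nmr))
            if nmr[k] <= 0:                 # every band is already below its mask
                break
            if bits[k] >= max_bits_per_band:
                nmr[k] = -np.inf            # band is saturated, ignore it from now on
                continue
            bits[k] += 1
            nmr[k] -= 6.02                  # one more bit buys ~6 dB of quantization SNR
            allocated += 1
        return bits

    levels = np.array([60.0, 48.0, 35.0, 20.0])   # subband signal levels (dB)
    masks = np.array([30.0, 32.0, 30.0, 25.0])    # masking thresholds (dB)
    print(allocate_bits(levels, masks, total_bits=12))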

  7. Subjective and Objective Assessment of Perceived Audio Quality of Current Digital Audio Broadcasting Systems and Web-Casting Applications

    NARCIS (Netherlands)

    Pocta, P.; Beerends, J.G.

    2015-01-01

    This paper investigates the impact on the quality perceived by the end user of the different audio codecs typically deployed in current digital audio broadcasting (DAB) systems and web-casting applications, which represent a main source of quality impairment in these systems and applications. Both

  8. High-Fidelity Piezoelectric Audio Device

    Science.gov (United States)

    Woodard, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient, low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity: the entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  9. Automatic Organisation and Quality Analysis of User-Generated Content with Audio Fingerprinting

    OpenAIRE

    Cavaco, Sofia; Magalhaes, Joao; Mordido, Gonçalo

    2018-01-01

    The increase in the quantity of user-generated content on social media has boosted the importance of analysing and organising content by its quality. Here, we propose a method that uses audio fingerprinting to organise and infer the quality of user-generated audio content. The proposed method detects the overlapping segments between different audio clips to organise and cluster the data according to events, and to infer the audio quality of the samples. A test setup with conce...
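
    As a rough sketch of the overlap idea, the code below reduces each clip to a coarse binary spectral fingerprint (signs of band-energy differences across adjacent bands and frames, in the spirit of classic audio fingerprinting) and slides one clip against another to find the alignment with the highest bit-match rate. The frame length, band count, and matching rule are illustrative assumptions, not the authors' method.

    import numpy as np

    def fingerprint(x: np.ndarray, frame: int = 2048, bands: int = 17) -> np.ndarray:
        """One (bands-1)-bit row per frame: signs of band-energy differences over time."""
        n_frames = len(x) // frame
        edges = np.linspace(0, frame // 2, bands + 1, dtype=int)
        energies = np.empty((n_frames, bands))
        for i in range(n_frames):
            spec = np.abs(np.fft.rfft(x[i * frame:(i + 1) * frame])) ** 2
            energies[i] = [spec[edges[b]:edges[b + 1]].sum() for b in range(bands)]
        diff = np.diff(energies, axis=1)                      # across adjacent bands
        return (diff[1:] - diff[:-1] > 0).astype(np.uint8)    # and across adjacent frames

    def best_overlap(fp_a: np.ndarray, fp_b: np.ndarray) -> tuple:
        """Slide fp_b over fp_a; return (frame offset, bit-match rate) of the best alignment."""
        best_off, best_rate = 0, 0.0
        for off in range(len(fp_a) - len(fp_b) + 1):
            rate = float(np.mean(fp_a[off:off + len(fp_b)] == fp_b))
            if rate > best_rate:
                best_off, best_rate = off, rate
        return best_off, best_rate

    rng = np.random.default_rng(0)
    long_clip = rng.normal(size=20 * 2048)
    short_clip = long_clip[5 * 2048:12 * 2048]                # overlapping excerpt
    print(best_overlap(fingerprint(long_clip), fingerprint(short_clip)))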

  10. Spatial audio quality perception (part 2)

    DEFF Research Database (Denmark)

    Conetta, R.; Brookes, T.; Rumsey, F.

    2015-01-01

    location, envelopment, coverage angle, ensemble width, and spaciousness. They can also impact timbre, and changes to timbre can then influence spatial perception. Previously obtained data was used to build a regression model of perceived spatial audio quality in terms of spatial and timbral metrics...

  11. Objective Audio Quality Assessment Based on Spectro-Temporal Modulation Analysis

    OpenAIRE

    Guo, Ziyuan

    2011-01-01

    Objective audio quality assessment is an interdisciplinary research area that incorporates audiology and machine learning. Although much work has been done on the machine learning aspect, the audiology aspect also deserves investigation. This thesis proposes a non-intrusive audio quality assessment algorithm, which is based on an auditory model that simulates the human auditory system. The auditory model is based on spectro-temporal modulation analysis of the spectrogram, which has been proven to be ...

  12. Location Audio Simplified: Capturing Your Audio and Your Audience

    CERN Document Server

    Miles, Dean

    2014-01-01

    From the basics of using camera, handheld, lavalier, and shotgun microphones to camera calibration and mixer set-ups, Location Audio Simplified unlocks the secrets to clean and clear broadcast-quality audio no matter what challenges you face. Author Dean Miles applies his twenty-plus years of experience as a professional location operator to teach the skills, techniques, tips, and secrets needed to produce high-quality production sound on location. Humorous and thoroughly practical, the book covers a wide array of topics, such as: location selection, field mixing, boo

  13. The relationship between basic audio quality and overall listening experience.

    Science.gov (United States)

    Schoeffler, Michael; Herre, Jürgen

    2016-09-01

    Basic audio quality (BAQ) is a well-known perceptual attribute, which is rated in various listening test methods to measure the performance of audio systems. Unfortunately, when it comes to purchasing audio systems, BAQ might not have a significant influence on the customers' buying decisions since other factors, like brand loyalty, might be more important. In contrast to BAQ, overall listening experience (OLE) is an affective attribute which incorporates all aspects that are important to an individual assessor, including his or her preference for music genre and audio quality. In this work, the relationship between BAQ and OLE is investigated in more detail. To this end, an experiment was carried out, in which participants rated the BAQ and the OLE of music excerpts with different timbral and spatial degradations. In a between-group-design procedure, participants were assigned into two groups, in each of which a different set of stimuli was rated. The results indicate that rating of both attributes, BAQ and OLE, leads to similar rankings, even if a different set of stimuli is rated. In contrast to the BAQ ratings, which were more influenced by timbral than spatial degradations, the OLE ratings were almost equally influenced by timbral and spatial degradations.

  14. The complete guide to high-end audio

    CERN Document Server

    Harley, Robert

    2015-01-01

    An updated edition of what many consider the "bible of high-end audio." In this newly revised and updated fifth edition, Robert Harley, editor-in-chief of The Absolute Sound magazine, tells you everything you need to know about buying and enjoying high-quality hi-fi. With this book, discover how to get the best sound for your money, how to identify the weak links in your system and upgrade where it will do the most good, how to set up and tweak your system for maximum performance, and how to become a more perceptive and appreciative listener. Just a few of the secrets you will learn cover hi

  15. Audio Quality Assurance: An Application of Cross Correlation

    DEFF Research Database (Denmark)

    Jurik, Bolette Ammitzbøll; Nielsen, Jesper Asbjørn Sindahl

    2012-01-01

    We describe algorithms for automated quality assurance of audio file content in the context of preservation actions and access. The algorithms use cross correlation to compare the sound waves. They are used to do overlap analysis in an access scenario, where preserved radio broadcasts are used in...
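
    A minimal sketch of the cross-correlation idea is given below: it estimates the time offset of one recording within another and uses the normalized correlation peak as a crude similarity score. This is an illustration with SciPy, not the project's actual QA pipeline; thresholds for accepting a match would be chosen per collection.

    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def align_and_score(a: np.ndarray, b: np.ndarray, fs: int) -> tuple:
        """Offset (seconds) of b within a at the correlation peak, plus a normalized score."""
        cc = correlate(a, b, mode="full", method="fft")
        lags = correlation_lags(len(a), len(b), mode="full")
        lag = int(lags[np.argmax(cc)])
        score = float(cc.max() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        return lag / fs, score

    fs = 8000
    rng = np.random.default_rng(0)
    full_recording = rng.normal(size=2 * fs)               # 2 s stand-in for a broadcast
    excerpt = full_recording[4000:12000]                    # 1 s excerpt starting at 0.5 s
    print(align_and_score(full_recording, excerpt, fs))     # offset ~0.5 s, score below 1.0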

  16. Detecting double compression of audio signal

    Science.gov (United States)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format in our daily life nowadays; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrates since high-bitrate files are of higher commercial value. Also, audio recordings in digital recorders can be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for finding fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this piece of work is the first one to detect double compression of audio signals.
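
    The sketch below covers only the feature and classifier stages described above, and it assumes the quantized MDCT coefficients have already been extracted from the MP3 bitstream (which requires a modified decoder, not shown). It computes the first-digit distribution of the coefficients and trains a support vector machine on hypothetical labelled examples.

    import numpy as np
    from sklearn.svm import SVC

    def first_digit_histogram(coeffs: np.ndarray) -> np.ndarray:
        """Distribution of the leading digits (1-9) of the nonzero coefficient magnitudes."""
        mags = np.abs(coeffs[coeffs != 0]).astype(float)
        leading = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
        hist = np.bincount(leading, minlength=10)[1:10].astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_detector(coeff_arrays: list, labels: list) -> SVC:
        """Hypothetical training set: one coefficient array per MP3 file, labelled
        1 for double compressed (transcoded) and 0 for single compressed."""
        features = np.vstack([first_digit_histogram(c) for c in coeff_arrays])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(features, labels)
        return clf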

  17. Measuring 3D Audio Localization Performance and Speech Quality of Conferencing Calls for a Multiparty Communication System

    Directory of Open Access Journals (Sweden)

    Mansoor Hyder

    2013-07-01

    Communication systems which support 3D (three-dimensional) audio offer a couple of advantages to the users/customers. Firstly, within the virtual acoustic environments all participants can easily be recognized through their placement/sitting positions. Secondly, all participants can turn their focus to any particular talker when multiple participants start talking at the same time, by taking advantage of the natural listening tendency called the Cocktail Party Effect. On the other hand, 3D audio is known to decrease overall speech quality because of the reverberation and echoes introduced within the listening environment. In this article, we study the tradeoff between speech quality and the natural human ability to localize audio events or talkers within our three-dimensional-audio-supported telephony and teleconferencing solution. Further, we performed subjective user studies incorporating two different HRTFs (Head Related Transfer Functions), different placements of the teleconferencing participants and different layouts of the virtual environments. Moreover, results of the subjective user studies for audio event localization and subjective speech quality are presented in this article. This subjective user study should help the research community to optimize existing 3D audio systems and to design new 3D-audio-supported teleconferencing solutions based on the quality-of-experience requirements of the users/customers, for agricultural personnel in particular and for all potential users in general.

  18. Measuring 3D Audio Localization Performance and Speech Quality of Conferencing Calls for a Multiparty Communication System

    International Nuclear Information System (INIS)

    Hyder, M.; Menghwar, G.D.; Qureshi, A.

    2013-01-01

    Communication systems which support 3D (three-dimensional) audio offer a couple of advantages to the users/customers. Firstly, within the virtual acoustic environments all participants can easily be recognized through their placement/sitting positions. Secondly, all participants can turn their focus to any particular talker when multiple participants start talking at the same time, by taking advantage of the natural listening tendency called the Cocktail Party Effect. On the other hand, 3D audio is known to decrease overall speech quality because of the reverberation and echoes introduced within the listening environment. In this article, we study the tradeoff between speech quality and the natural human ability to localize audio events or talkers within our three-dimensional-audio-supported telephony and teleconferencing solution. Further, we performed subjective user studies incorporating two different HRTFs (Head Related Transfer Functions), different placements of the teleconferencing participants and different layouts of the virtual environments. Moreover, results of the subjective user studies for audio event localization and subjective speech quality are presented in this article. This subjective user study should help the research community to optimize existing 3D audio systems and to design new 3D-audio-supported teleconferencing solutions based on the quality-of-experience requirements of the users/customers, for agricultural personnel in particular and for all potential users in general. (author)

  19. Class D audio amplifiers for high voltage capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis

    of high volume, weight, and cost. Highly efficient class D amplifiers are now widely available, offering power densities that their linear counterparts cannot match. Unlike the technology of audio amplifiers, the loudspeaker is still based on the traditional electrodynamic transducer invented by C.W. Rice......Audio reproduction systems contain two key components, the amplifier and the loudspeaker. In the last 20–30 years the technology of audio amplifiers has undergone a fundamental shift of paradigm. Class D audio amplifiers have replaced the linear amplifiers, suffering from the well-known issues...... with the low level of acoustical output power and complex amplifier requirements, have limited the commercial success of the technology. Horn or compression drivers are typically favoured when high acoustic output power is required; this is, however, at the expense of significant distortion combined...

  20. Improvements of ModalMax High-Fidelity Piezoelectric Audio Device

    Science.gov (United States)

    Woodard, Stanley E.

    2005-01-01

    ModalMax audio speakers have been enhanced by innovative means of tailoring the vibration response of thin piezoelectric plates to produce a high-fidelity audio response. The ModalMax audio speakers are 1 mm in thickness. The device completely supplants the need for a separate driver and speaker cone. ModalMax speakers can serve the same applications as cone speakers, but unlike cone speakers, ModalMax speakers can function in harsh environments such as high humidity or extreme wetness. New design features allow the speakers to be completely submersed in salt water, making them well suited for maritime applications. The sound produced by ModalMax audio speakers has spatial resolution that is readily discernible for headset users.

  1. High-Order Sparse Linear Predictors for Audio Processing

    DEFF Research Database (Denmark)

    Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll

    2010-01-01

    Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability to model the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different...

  2. Newnes audio and Hi-Fi engineer's pocket book

    CERN Document Server

    Capel, Vivian

    2013-01-01

    Newnes Audio and Hi-Fi Engineer's Pocket Book, Second Edition provides a concise discussion of several audio topics. The book comprises 10 chapters that cover different audio equipment. The coverage of the text includes microphones, gramophones, compact discs, and tape recorders. The book also covers high-quality radio, amplifiers, and loudspeakers. The book then reviews the concepts of sound and acoustics, and presents some facts and formulas relevant to audio. The text will be useful to sound engineers and other professionals whose work involves sound systems.

  3. A Method to Detect AAC Audio Forgery

    Directory of Open Access Journals (Sweden)

    Qingzhong Liu

    2015-08-01

    Advanced Audio Coding (AAC), a standardized lossy compression scheme for digital audio designed to be the successor of the MP3 format, generally achieves better sound quality than MP3 at similar bit rates. Since AAC is also the default or standard audio format for many devices and AAC audio files may be presented as important digital evidence, authentication of these audio files is highly needed but relatively missing. In this paper, we propose a scheme to expose tampered AAC audio streams that are encoded at the same encoding bit-rate. Specifically, we design a shift-recompression based method to retrieve the differential features between the re-encoded audio stream at each shift and the original audio stream; a learning classifier is employed to recognize the different patterns of differential features of doctored forgery files and original (untouched) audio files. Experimental results show that our approach is very promising and effective for detecting same-bit-rate forgery on AAC audio streams. Our study also shows that shift-recompression-based differential analysis is very effective for detection of MP3 forgery at the same bit rate.

  4. Implementing Audio-CASI on Windows’ Platforms

    Science.gov (United States)

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self-interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, this will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  5. A high efficiency PWM CMOS class-D audio power amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Zhangming; Liu Lianxi; Yang Yintang [Institute of Microelectronics, Xidian University, Xi'an 710071 (China)]; Lei Han, E-mail: zmyh@263.ne [Xi'an Power-Rail Micro Co., Ltd, Xi'an 710075 (China)]

    2009-02-15

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  6. A high efficiency PWM CMOS class-D audio power amplifier

    International Nuclear Information System (INIS)

    Zhu Zhangming; Liu Lianxi; Yang Yintang; Lei Han

    2009-01-01

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  7. A high efficiency PWM CMOS class-D audio power amplifier

    Science.gov (United States)

    Zhangming, Zhu; Lianxi, Liu; Yintang, Yang; Han, Lei

    2009-02-01

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  8. Car audio using DSP for active sound control. DSP ni yoru active seigyo wo mochiita audio

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, K.; Asano, S.; Furukawa, N. (Mitsubishi Motor Corp., Tokyo (Japan))

    1993-06-01

    In the automobile cabin, there are some unique problems which spoil the quality of sound reproduction from audio equipment, such as the narrow space and/or the background noise. Audio signal processing using a DSP (digital signal processor) enables a solution to these problems. A car audio system with high amenity has been successfully made using active sound control with a DSP. The DSP consists of an adder, coefficient multiplier, delay unit, and connections. The actual DSP processing uses functions such as sound field correction, response to and processing of noise during driving, surround reproduction, graphic equalizer processing, etc. The high effectiveness of the method was confirmed through an actual driving evaluation test. The present paper describes the actual sound control technology using DSP. In particular, the dynamic processing of noise during driving is discussed in detail. 1 ref., 12 figs., 1 tab.

  9. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors, we load the specific processor into the FPGA on the fly, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high quality audio, and an energy efficient solution.

  10. Radioactive Decay: Audio Data Collection

    Science.gov (United States)

    Struthers, Allan

    2009-01-01

    Many phenomena generate interesting audible time series. This data can be collected and processed using audio software. The free software package "Audacity" is used to demonstrate the process by recording, processing, and extracting click times from an inexpensive radiation detector. The high quality of the data is demonstrated with a simple…
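
    For a sense of how the same data collection could be scripted instead of done interactively in Audacity, the sketch below loads a recording of detector clicks and extracts click times by simple threshold crossing with a short dead time. The threshold, dead time, and file name are illustrative assumptions, and the clicks are assumed to sit well above the noise floor.

    import numpy as np
    from scipy.io import wavfile

    def click_times(path: str, threshold: float = 0.5, dead_time_s: float = 0.002) -> np.ndarray:
        """Times (s) at which the normalized signal first crosses the threshold."""
        fs, x = wavfile.read(path)
        if x.ndim > 1:
            x = x[:, 0]                                    # keep one channel
        x = x.astype(float)
        x = x / (np.max(np.abs(x)) + 1e-12)                # normalize to [-1, 1]
        above = np.flatnonzero(np.abs(x) > threshold)
        times, last = [], -np.inf
        for i in above:
            t = i / fs
            if t - last > dead_time_s:                     # skip samples inside one click
                times.append(t)
                last = t
        return np.array(times)

    # Inter-click intervals of a Poisson process should be exponentially distributed, e.g.:
    # intervals = np.diff(click_times("geiger.wav")); print(intervals.mean())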

  11. Audio Frequency Analysis in Mobile Phones

    Science.gov (United States)

    Aguilar, Horacio Munguía

    2016-01-01

    A new experiment using mobile phones is proposed in which the phone's audio frequency response is analyzed using the audio port to input an external signal and obtain a measurable output. This experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality in this service. A brief discussion is…

  12. Towards Unification of Methods for Speech, Audio, Picture and Multimedia Quality Assessment

    DEFF Research Database (Denmark)

    Zielinski, S.; Rumsey, F.; Bech, Søren

    2015-01-01

    attempting to “bridge the gap” between the quality assessment methods used in various disciplines are indicated. Prospective challenges faced by researchers in the unification process are outlined. They include development of unified scales, defining unified anchors, integration of objective models......The paper addresses the need to develop unified methods for subjective and objective quality assessment across speech, audio, picture, and multimedia applications. Commonalities and differences between the currently used standards are overviewed. Examples of the already undertaken research...

  13. A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity

    Directory of Open Access Journals (Sweden)

    Maoshen Jia

    2017-12-01

    Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. The statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a psychoacoustic-based time-frequency instants sorting algorithm and an energy equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.
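
    A minimal sketch of the intra-object sparsity idea follows: each object's frame is transformed with the MDCT and only the largest-magnitude bins are kept (approximate k-sparsity). The direct O(N²) MDCT, the sine window, and the choice of k are illustrative; the paper's psychoacoustic sorting and NPTF allocation are not shown.

    import numpy as np

    def mdct(frame: np.ndarray) -> np.ndarray:
        """Direct MDCT of one 2N-sample windowed frame -> N coefficients (slow, illustrative)."""
        two_n = len(frame)
        half = two_n // 2
        n = np.arange(two_n)
        k = np.arange(half)[:, None]
        return (frame * np.cos(np.pi / half * (n + 0.5 + half / 2) * (k + 0.5))).sum(axis=1)

    def keep_top_k(coeffs: np.ndarray, k: int) -> np.ndarray:
        """Zero all but the k largest-magnitude coefficients (approximate k-sparsity)."""
        out = np.zeros_like(coeffs)
        idx = np.argsort(np.abs(coeffs))[-k:]
        out[idx] = coeffs[idx]
        return out

    fs, half = 48000, 1024
    n = np.arange(2 * half)
    window = np.sin(np.pi * (n + 0.5) / (2 * half))           # sine window
    frame = np.sin(2 * np.pi * 440 * n / fs) * window         # one windowed frame of a 440 Hz tone
    sparse = keep_top_k(mdct(frame), k=32)
    print(np.count_nonzero(sparse), "of", half, "bins kept")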

  14. Advances in audio source separation and multisource audio content retrieval

    Science.gov (United States)

    Vincent, Emmanuel

    2012-06-01

    Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.

  15. A High-Voltage Class D Audio Amplifier for Dielectric Elastomer Transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2014-01-01

    Dielectric Elastomer (DE) transducers have emerged as a very interesting alternative to the traditional electrodynamic transducer. Lightweight, small size and high maneuverability are some of the key features of the DE transducer. An amplifier for the DE transducer suitable for audio applications is proposed and analyzed. The amplifier addresses the issue of a high impedance load, ensuring a linear response over the midrange region of the audio bandwidth (100 Hz – 3.5 kHz). THD+N below 0.1% is reported for the ± 300 V prototype amplifier producing a maximum of 125 Var at a peak efficiency of 95%.

  16. Do Live versus Audio-Recorded Narrative Stimuli Influence Young Children's Narrative Comprehension and Retell Quality?

    Science.gov (United States)

    Kim, Young-Suk Grace

    2016-01-01

    Purpose: The primary aim of the present study was to examine whether different ways of presenting narrative stimuli (i.e., live narrative stimuli versus audio-recorded narrative stimuli) influence children's performances on narrative comprehension and oral-retell quality. Method: Children in kindergarten (n = 54), second grade (n = 74), and fourth…

  17. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  18. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  19. On the relative importance of audio and video in the presence of packet losses

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Myakotnykh, Eugene

    2010-01-01

    In streaming applications, unequal protection of audio and video tracks may be necessary to maintain the optimal perceived overall quality. For this purpose, the application should be aware of the relative importance of audio and video in an audiovisual sequence. In this paper, we propose a subjective test arrangement for finding the optimal tradeoff between subjective audio and video qualities in situations when it is not possible to have perfect quality for both modalities concurrently. Our results show that content has a significant impact on the preferred compromise between audio and video quality, but also that the currently used classification criteria for content are not sufficient to predict the users' preference...

  20. Audio Papers

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2016-01-01

    With this special issue of Seismograf we are happy to present a new format of articles: audio papers. Audio papers resemble the regular essay or the academic text in that they deal with a certain topic of interest, but they are presented in the form of an audio production. The audio paper is an extension...

  1. Time-Scale Invariant Audio Data Embedding

    Directory of Open Access Journals (Sweden)

    Mansour Mohamed F

    2003-01-01

    We propose a novel algorithm for high-quality data embedding in audio. The algorithm is based on changing the relative length of the middle segment between two successive maximum and minimum peaks to embed data. Spline interpolation is used to change the lengths. To ensure smooth monotonic behavior between peaks, a hybrid orthogonal and nonorthogonal wavelet decomposition is used prior to data embedding. The possible data embedding rates are between 20 and 30 bps. However, for practical purposes, we use repetition codes, and the effective embedding data rate is around 5 bps. The algorithm is invariant after time-scale modification, time shift, and time cropping. It gives high-quality output and is robust to mp3 compression.

  2. New audio applications of beryllium metal

    International Nuclear Information System (INIS)

    Sato, M.

    1977-01-01

    The major applications of beryllium metal in the field of audio appliances are the vibrating cones for two types of speakers, the 'TWEETER' for high-range sound and the 'SQUAWKER' for mid-range sound, and the beryllium cantilever tube assembled in stereo cartridges. These new applications are based on the characteristic property of beryllium: a high ratio of modulus of elasticity to specific gravity. The production of these audio parts is described, and the audio response is shown. (author)

  3. Structure Learning in Audio

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch

    By having information about the setting a user is in, a computer is able to make decisions proactively to facilitate tasks for the user. Two approaches are taken in this thesis to achieve more information about an audio environment. One approach is that of classifying audio, and a new approach...... investigated. A fast and computationally simple approach that compares recordings and classifies whether they are from the same audio environment has been developed; it shows very high accuracy and the ability to synchronize recordings in the case of recording devices which are not connected. A more general model...

  4. Audio Mining with emphasis on Music Genre Classification

    DEFF Research Database (Denmark)

    Meng, Anders

    2004-01-01

    Audio is an important part of our daily life; basically it increases our impression of the world around us, whether this is communication, music, danger detection etc. Currently the field of Audio Mining, which here includes the areas of music genre, music recognition/retrieval, playlist generation...... the world the problem of detecting environments from the input audio is researched so as to increase the quality of life of the hearing-impaired. Basically there is a lot of work within the field of audio mining. The presentation will mainly focus on music genre classification where we have a fixed amount of genres...... to choose from. Basically every audio mining system consists more or less of the same stages as for the music genre setting. My research so far has mainly focused on finding relevant features for music genre classification at different timescales using early and late information fusion. It has...

  5. Intelligent audio analysis

    CERN Document Server

    Schuller, Björn W

    2013-01-01

    This book provides the reader with the knowledge necessary for comprehension of the field of Intelligent Audio Analysis. It first introduces standard methods and discusses the typical Intelligent Audio Analysis chain going from audio data to audio features to audio recognition. Further, an introduction to audio source separation, enhancement, and robustness is given. After the introductory parts, the book shows several applications for the three types of audio: speech, music, and general sound. Each task is shortly introduced, followed by a description of the specific data and methods applied, experiments and results, and a conclusion for this specific task. The book provides benchmark results and standardized test-beds for a broader range of audio analysis tasks. The main focus thereby lies on the parallel advancement of realism in audio analysis, as too often today's results are overly optimistic owing to idealized testing conditions, and it serves to stimulate synergies arising from transfer of ...

  6. Do Live versus Audio-Recorded Narrative Stimuli Influence Young Children's Narrative Comprehension and Retell Quality?

    Science.gov (United States)

    Kim, Young-Suk Grace

    2016-01-01

    Purpose: The primary aim of the present study was to examine whether different ways of presenting narrative stimuli (i.e., live narrative stimuli versus audio-recorded narrative stimuli) influence children's performances on narrative comprehension and oral-retell quality. Method: Children in kindergarten (n = 54), second grade (n = 74), and fourth…

  7. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    Science.gov (United States)

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
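
    A short usage sketch of the library for the feature-extraction step is shown below. The module and function names follow recent pyAudioAnalysis releases and may differ in older versions, so treat the exact names, as well as the placeholder file name, as assumptions to verify against the installed version.

    from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

    # Load a clip, fold it to mono, and extract short-term features (ZCR, energy,
    # MFCCs, chroma, ...) on 50 ms windows with a 25 ms hop.
    fs, signal = audioBasicIO.read_audio_file("example.wav")   # "example.wav" is a placeholder
    signal = audioBasicIO.stereo_to_mono(signal)
    features, feature_names = ShortTermFeatures.feature_extraction(
        signal, fs, int(0.050 * fs), int(0.025 * fs))
    print(features.shape, feature_names[:5])                   # (n_features, n_frames)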

  8. Smartphone audio port data collection cookbook

    Directory of Open Access Journals (Sweden)

    Kyle Forinash

    2018-06-01

    The audio port of a smartphone is designed to send and receive audio but can be harnessed for portable, economical, and accurate data collection from a variety of sources. While smartphones have internal sensors to measure a number of physical phenomena such as acceleration, magnetism and illumination levels, measurement of other phenomena such as voltage, external temperature, or accurate timing of moving objects is excluded. The audio port can not only be employed to sense such external phenomena; it has the additional advantage of timing precision. Because audio is recorded or played at a controlled rate, separated from other smartphone activities, timings based on audio can be highly accurate. The following outlines unpublished details of the audio port's technical elements for data collection, a general data collection recipe and an example timing application for Android devices.

  9. A review of lossless audio compression standards and algorithms

    Science.gov (United States)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demand. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) specifically due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.
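
    A minimal sketch of the LPC idea at the heart of many lossless coders is given below: predictor coefficients are fitted from the autocorrelation (normal equations) and the small prediction residual is what would be entropy coded instead of the raw samples. Coefficient quantization and residual coding (e.g. Rice coding) are omitted, and the test signal is synthetic.

    import numpy as np

    def lpc_coefficients(x: np.ndarray, order: int) -> np.ndarray:
        """Solve the autocorrelation normal equations for the predictor coefficients."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])

    def residual(x: np.ndarray, a: np.ndarray) -> np.ndarray:
        """Prediction error e[n] = x[n] - sum_k a[k] * x[n-1-k]."""
        order = len(a)
        pred = np.zeros_like(x)
        for n in range(order, len(x)):
            pred[n] = np.dot(a, x[n - order:n][::-1])
        return x - pred

    x = np.cumsum(np.random.default_rng(0).normal(size=4000))    # a highly correlated test signal
    e = residual(x, lpc_coefficients(x, order=8))
    print(f"raw std {x.std():.1f} vs residual std {e.std():.2f}")  # the residual is far smaller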

  10. The audio expert everything you need to know about audio

    CERN Document Server

    Winer, Ethan

    2012-01-01

    The Audio Expert is a comprehensive reference that covers all aspects of audio, with many practical, as well as theoretical, explanations. Providing in-depth descriptions of how audio really works, using common sense plain-English explanations and mechanical analogies with minimal math, the book is written for people who want to understand audio at the deepest, most technical level, without needing an engineering degree. It's presented in an easy-to-read, conversational tone, and includes more than 400 figures and photos augmenting the text.The Audio Expert takes th

  11. Key factors for a high-quality VR experience

    Science.gov (United States)

    Champel, Mary-Luc; Doré, Renaud; Mollet, Nicolas

    2017-09-01

    For many years, Virtual Reality has been presented as a promising technology that could deliver a truly new experience to users. The media and entertainment industry is now investigating the possibility to offer a video-based VR 360 experience. Nevertheless, there is a substantial risk that VR 360 could have the same fate as 3DTV if it cannot offer more than just being the next fad. The present paper aims at presenting the various quality factors required for a high-quality VR experience. More specifically, this paper will focus on the main three VR quality pillars: visual, audio and immersion.

  12. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Umapathy Karthikeyan

    2007-01-01

    Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique with subjective difference grades (SDG) ranging from –0.53 to –2.27.

  13. High quality voice synthesis middleware; Kohinshitsu onsei gosei middle war

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Toshiba Corp. newly developed a natural voice synthesis system, the TOS Drive TTS (TOtally speaker Driven Text-To-Speech) system, in which natural, high-quality read-aloud is greatly improved, and also developed a voice synthesis middleware as its application. In the newly developed system, using a narrator's voice recorded beforehand as a model, a metrical control dictionary is automatically learned that reproduces the characteristics of metrical patterns such as the intonation or rhythm of a human voice, as is a voice base dictionary that reproduces the characteristics of voice quality, enabling natural voice synthesis that picks up human voice characteristics. The system is high quality and also very compact, while the voice synthesis middleware utilizing this technology is adaptable to various platforms such as MPU or OS. The system is very suitable for audio response in the ITS field, which has car navigation systems at its core; furthermore, expanded application is expected for audio response systems that used to employ sound recording and reproduction. (translated by NEDO)

  14. Audio Feedback -- Better Feedback?

    Science.gov (United States)

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  15. Sounding ruins: reflections on the production of an ‘audio drift’

    Science.gov (United States)

    Gallagher, Michael

    2014-01-01

    This article is about the use of audio media in researching places, which I term ‘audio geography’. The article narrates some episodes from the production of an ‘audio drift’, an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners’ attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies. PMID:29708107

  16. Audio Restoration

    Science.gov (United States)

    Esquef, Paulo A. A.

    The first reproducible recording of human voice was made in 1877 on a tinfoil cylinder phonograph devised by Thomas A. Edison. Since then, much effort has been expended to find better ways to record and reproduce sounds. By the mid-1920s, the first electrical recordings appeared and gradually took over purely acoustic recordings. The development of electronic computers, in conjunction with the ability to record data onto magnetic or optical media, culminated in the standardization of compact disc format in 1980. Nowadays, digital technology is applied to several audio applications, not only to improve the quality of modern and old recording/reproduction techniques, but also to trade off sound quality for less storage space and less taxing transmission capacity requirements.

  17. Frequency Hopping Method for Audio Watermarking

    Directory of Open Access Journals (Sweden)

    A. Anastasijević

    2012-11-01

    Full Text Available This paper evaluates the degradation of audio content for a perceptible removable watermark. Two different approaches to embedding the watermark in the spectral domain were investigated. The frequencies for watermark embedding are chosen according to a pseudorandom sequence making the methods robust. Consequentially, the lower quality audio can be used for promotional purposes. For a fee, the watermark can be removed with a secret watermarking key. Objective and subjective testing was conducted in order to measure degradation level for the watermarked music samples and to examine residual distortion for different parameters of the watermarking algorithm and different music genres.

  18. Carrier Distortion in Hysteretic Self-Oscillating Class-D Audio Power

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Kofod; Andersen, Michael A. E.

    2009-01-01

    An important distortion mechanism in hysteretic self-oscillating (SO) class-D (switch mode) power amplifiers, carrier distortion, is analyzed and an optimization method is proposed. This mechanism is an issue in any power amplifier application where a high degree of proportionality between input...... and output is required, such as in audio power amplifiers or xDSL drivers. From an average-mode point of view, carrier distortion is shown to be caused by nonlinear variation of the hysteretic comparator input average voltage with the output average voltage. This easily causes total harmonic distortion...... figures in excess of 0.1–0.2%, inadequate for high-quality audio applications. Carrier distortion is shown to be minimized when the feedback system is designed to provide a triangular carrier (sliding) signal at the input of a hysteretic comparator. The proposed optimization method is experimentally...

  19. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  20. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Karthikeyan Umapathy

    2007-08-01

    Full Text Available Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique with subjective difference grades (SDG) ranging from –0.53 to –2.27.

  1. Audio Conferencing Enhancements

    OpenAIRE

    VESTERINEN, LEENA

    2006-01-01

    Audio conferencing allows multiple people in distant locations to interact in a single voice call. Whilst it can be a very useful service, it also has several key disadvantages. This thesis study investigated the options for improving the user experience of mobile teleconferencing applications. In particular, the use of 3D spatial audio and visual-interactive functionality was investigated as a means of improving the intelligibility and audio perception during the audio...

  2. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  3. Investigation of Energy Consumption and Sound Quality for Class-D Audio Amplifiers using Tracking Power Supplies

    DEFF Research Database (Denmark)

    Yamauchi, Akira; Schneider, Henrik; Knott, Arnold

    2015-01-01

    power supply tracking and its influence on power losses, audio performance and environmental impact for a 130 W class-D amplifier prototype as well as a commercialized class-D amplifier. Both modeled and experimental results verify that a large improvement of efficiency can be achieved. The total...... harmonic is found to be unaffected by stepless power supply tracking due to the high supply rejection ratio of the used amplifiers under test.......The main advantage of Class-D audio amplifiers is high efficiency which is often stated to be more than 90 % but at idle or low power levels the efficiency is much lower. The waste energy is an environmental concern, a concern in mobile applications where long battery operation is required...

  4. Audio Twister

    DEFF Research Database (Denmark)

    Cermak, Daniel; Moreno Garcia, Rodrigo; Monastiridis, Stefanos

    2015-01-01

    Daniel Cermak-Sassenrath, Rodrigo Moreno Garcia, Stefanos Monastiridis. Audio Twister. Installation. P-Hack Copenhagen 2015, Copenhagen, DK, Apr 24, 2015.

  5. Back to basics audio

    CERN Document Server

    Nathan, Julian

    1998-01-01

    Back to Basics Audio is a thorough, yet approachable handbook on audio electronics theory and equipment. The first part of the book discusses electrical and audio principles. Those principles form a basis for understanding the operation of equipment and systems, covered in the second section. Finally, the author addresses planning and installation of a home audio system.Julian Nathan joined the audio service and manufacturing industry in 1954 and moved into motion picture engineering and production in 1960. He installed and operated recording theaters in Sydney, Austra

  6. Self-oscillating modulators for direct energy conversion audio power amplifiers

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    Direct energy conversion audio power amplifier represents total integration of switching-mode power supply and Class D audio power amplifier into one compact stage, achieving high efficiency, high level of integration, low component count and eventually low cost. This paper presents how self-oscillating modulators can be used with the direct switching-mode audio power amplifier to improve its performance by providing fast hysteretic control with high power supply rejection ratio, open-loop stability and high bandwidth. Its operation is thoroughly analyzed and simulated waveforms of a prototype amplifier are presented. (au)

  7. Multi Carrier Modulation Audio Power Amplifier with Programmable Logic

    DEFF Research Database (Denmark)

    Christiansen, Theis; Andersen, Toke Meyer; Knott, Arnold

    2009-01-01

    While switch-mode audio power amplifiers allow compact implementations and high output power levels due to their high power efficiency, they are very well known for creating electromagnetic interference (EMI) with other electronic equipment. To lower the EMI of switch-mode (class D) audio power a...

  8. Audio power amplifier design handbook

    CERN Document Server

    Self, Douglas

    2013-01-01

    This book is essential for audio power amplifier designers and engineers for one simple reason...it enables you as a professional to develop reliable, high-performance circuits. The Author Douglas Self covers the major issues of distortion and linearity, power supplies, overload, DC-protection and reactive loading. He also tackles unusual forms of compensation and distortion produced by capacitors and fuses. This completely updated fifth edition includes four NEW chapters including one on The XD Principle, invented by the author, and used by Cambridge Audio. Cro

  9. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
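
    As an illustration of the residual-motion metric described above, the sketch below computes the standard deviation of a respiratory trace inside a displacement gating window for a few duty cycles. The synthetic trace, the sampling rate and the way the window is widened to reach a given duty cycle are assumptions made for this sketch, not details from the study.

```python
import numpy as np

def residual_motion(trace, duty_cycle):
    """Standard deviation of a respiratory trace inside a displacement
    gating window placed at end-exhalation (the low end of the trace).
    The window is widened until `duty_cycle` of the samples fall inside it;
    this windowing rule is an assumption of the sketch, not the study's
    exact protocol."""
    threshold = np.quantile(trace, duty_cycle)
    gated = trace[trace <= threshold]
    return gated.std()

# Synthetic breathing trace: ~4 s period, 25 Hz sampling, 60 s long.
t = np.arange(0, 60, 1 / 25)
trace = 1.0 - np.cos(2 * np.pi * t / 4.0) ** 2   # 0 at exhale, 1 at inhale
for dc in (0.3, 0.5, 0.7):
    print(f"duty cycle {dc:.0%}: residual motion = {residual_motion(trace, dc):.3f}")
```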

  10. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
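
    The offset between recordings that share an embedded random signal can be recovered by cross-correlation. The sketch below is a minimal illustration of that principle under simple assumptions (both extracted synchronization signals at the same sampling rate, additive noise); it is not the authors' implementation.

```python
import numpy as np

def estimate_offset(ref, rec):
    """Estimate the delay (in samples) of `rec` relative to `ref` by
    cross-correlating the shared random synchronization signal."""
    corr = np.correlate(rec - rec.mean(), ref - ref.mean(), mode="full")
    return corr.argmax() - (len(ref) - 1)

rng = np.random.default_rng(0)
sync = rng.standard_normal(5000)               # the embedded random signal
# Two hypothetical sensor streams that picked up the same sync signal
# with different start delays and some measurement noise.
audio = np.concatenate([np.zeros(137), sync]) + 0.2 * rng.standard_normal(5137)
video = np.concatenate([np.zeros(21), sync]) + 0.2 * rng.standard_normal(5021)

print(estimate_offset(audio, video))           # about -116 samples
```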

  11. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  12. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  13. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The digital audio signal processor (DSP) TC9446F series has been developed for silicon audio playback devices that use a memory medium such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats such as Dolby Digital, DTS (digital theater system) and MPEG2 audio, which are adopted for DVDs. It can carry built-in audio signal processing programs such as Dolby ProLogic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the lineup; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  14. Microphone Handling Noise: Measurements of Perceptual Threshold and Effects on Audio Quality.

    Directory of Open Access Journals (Sweden)

    Paul Kendrick

    Full Text Available A psychoacoustic experiment was carried out to test the effects of microphone handling noise on perceived audio quality. Handling noise is a problem affecting both amateurs using their smartphones and cameras, as well as professionals using separate microphones and digital recorders. The noises used for the tests were measured from a variety of devices, including smartphones, laptops and handheld microphones. The signal features that characterise these noises are analysed and presented. The sounds include various types of transient, impact noises created by tapping or knocking devices, as well as more sustained sounds caused by rubbing. During the perceptual tests, listeners auditioned speech podcasts and were asked to rate the degradation of any unwanted sounds they heard. A representative design test methodology was developed that tried to encourage everyday rather than analytical listening. Signal-to-noise ratio (SNR) of the handling noise events was shown to be the best predictor of quality degradation. Other factors such as noise type or background noise in the listening environment did not significantly affect quality ratings. Podcast, microphone type and reproduction equipment were found to be significant but only to a small extent. A model allowing the prediction of degradation from the SNR is presented. The SNR threshold at which 50% of subjects noticed handling noise was found to be 4.2 ± 0.6 dBA. The results from this work are important for the understanding of our perception of impact sound and resonant noises in recordings, and will inform the future development of an automated predictor of quality for handling noise.
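
    The reported 50% detection threshold of 4.2 dBA suggests a simple psychometric sketch for predicting whether a handling-noise event will be noticed from its SNR. The logistic form and its slope below are assumptions for illustration only; the paper's actual prediction model is not reproduced here.

```python
import math

def p_noticed(snr_dba, threshold=4.2, slope=1.0):
    """Probability that a handling-noise event is noticed, as a logistic
    function of its SNR (in dBA). The threshold is the 50% point reported
    in the abstract; the slope is an assumed free parameter. Lower SNR
    (louder handling noise) gives a higher probability of being noticed."""
    return 1.0 / (1.0 + math.exp(slope * (snr_dba - threshold)))

for snr in (-6, 0, 4.2, 10, 20):
    print(f"SNR {snr:>5} dBA -> P(noticed) ~ {p_noticed(snr):.2f}")
```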

  15. Self-oscillating modulators for direct energy conversion audio power amplifiers

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    Direct energy conversion audio power amplifier represents total integration of switching-mode power supply and Class D audio power amplifier into one compact stage, achieving high efficiency, high level of integration, low component count and eventually low cost. This paper presents how self-oscillating...

  16. An audio FIR-DAC in a BCD process for high power Class-D amplifiers

    NARCIS (Netherlands)

    Doorn, T.S.; van Tuijl, Adrianus Johannes Maria; Schinkel, Daniel; Annema, Anne J.; Berkhout, M.; Berkhout, M.; Nauta, Bram

    A 322 coefficient semi-digital FIR-DAC using a 1-bit PWM input signal was designed and implemented in a high voltage, audio power bipolar CMOS DMOS (BCD) process. This facilitates digital input signals for an analog class-D amplifier in BCD. The FIR-DAC performance depends on the ISI-resistant

  17. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...

  18. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio.......This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how...

  19. Efficiency Optimization in Class-D Audio Amplifiers

    DEFF Research Database (Denmark)

    Yamauchi, Akira; Knott, Arnold; Jørgensen, Ivan Harald Holger

    2015-01-01

    This paper presents a new power efficiency optimization routine for designing Class-D audio amplifiers. The proposed optimization procedure finds design parameters for the power stage and the output filter, and the optimum switching frequency such that the weighted power losses are minimized under...... the given constraints. The optimization routine is applied to minimize the power losses in a 130 W class-D audio amplifier based on consumer behavior investigations, where the amplifier operates at idle and low power levels most of the time. Experimental results demonstrate that the optimization method can...... lead to around 30 % of efficiency improvement at 1.3 W output power without significant effects on both audio performance and the efficiency at high power levels....

  20. Audio-frequency noise emissions from high-voltage overhead power lines

    International Nuclear Information System (INIS)

    Semmler, M.; Straumann, U.; Roero, C.; Teich, T. H.

    2005-01-01

    This article discusses the noise emissions caused by high-voltage overhead power lines that can occur under certain atmospheric conditions. These emissions, caused by electric discharges around the conductors, can reach disturbing levels, depending on the conditions prevailing at the time in question. The causes of the discharges are examined and the ionisation processes involved are described. The parameters influencing the discharges are discussed, as are measures that can be taken to reduce such audio-frequency emissions. The authors note that a reduction of peripheral field strengths can reduce emissions and that hydrophilic coatings can lead to a faster reduction of such effects after rainfall.

  1. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In the recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes), a...... time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear, potential advantage of using the joint audiovisual localization framework....

  2. Modeling Audio Fingerprints : Structure, Distortion, Capacity

    NARCIS (Netherlands)

    Doets, P.J.O.

    2010-01-01

    An audio fingerprint is a compact low-level representation of a multimedia signal. An audio fingerprint can be used to identify audio files or fragments in a reliable way. The use of audio fingerprints for identification consists of two phases. In the enrollment phase known content is fingerprinted,

  3. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...

  4. Animation, audio, and spatial ability: Optimizing multimedia for scientific explanations

    Science.gov (United States)

    Koroghlanian, Carol May

    This study investigated the effects of audio, animation and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a biology course were blocked by spatial ability and randomly assigned to one of four treatments (Text-Static Illustration, Audio-Static Illustration, Text-Animation, Audio-Animation). The study examined the effects of instructional mode (Text vs. Audio), illustration mode (Static Illustration vs. Animation) and spatial ability (Low vs. High) on practice and posttest achievement, attitude and time. Results for practice achievement indicated that high spatial ability participants achieved more than low spatial ability participants. Similar results for posttest achievement and spatial ability were not found. Participants in the Static Illustration treatments achieved the same as participants in the Animation treatments on both the practice and posttest. Likewise, participants in the Text treatments achieved the same as participants in the Audio treatments on both the practice and posttest. In terms of attitude, participants responded favorably to the computer-based instructional program. They found the program interesting, felt the static illustrations or animations made the explanations easier to understand and concentrated on learning the material. Furthermore, participants in the Animation treatments felt the information was easier to understand than participants in the Static Illustration treatments. However, no difference for any attitude item was found for participants in the Text as compared to those in the Audio treatments. Significant differences were found by Spatial Ability for three attitude items concerning concentration and interest. In all three items, the low spatial ability participants responded more positively

  5. Introduction to audio analysis a MATLAB approach

    CERN Document Server

    Giannakopoulos, Theodoros

    2014-01-01

    Introduction to Audio Analysis serves as a standalone introduction to audio analysis, providing theoretical background to many state-of-the-art techniques. It covers the essential theory necessary to develop audio engineering applications, but also uses programming techniques, notably MATLAB®, to take a more applied approach to the topic. Basic theory and reproducible experiments are combined to demonstrate theoretical concepts from a practical point of view and provide a solid foundation in the field of audio analysis. Audio feature extraction, audio classification, audio segmentation, au

  6. Roundtable Audio Discussion

    Directory of Open Access Journals (Sweden)

    Chris Bigum

    2007-01-01

    Full Text Available RoundTable on Technology, Teaching and Tools. This is a roundtable audio interview conducted by James Farmer, founder of Edublogs, with Anne Bartlett-Bragg (University of Technology Sydney) and Chris Bigum (Deakin University). Skype was used to make and record the audio conference and the resulting sound file was edited by Andrew McLauchlan.

  7. The Effect of Audio and Animation in Multimedia Instruction

    Science.gov (United States)

    Koroghlanian, Carol; Klein, James D.

    2004-01-01

    This study investigated the effects of audio, animation, and spatial ability in a multimedia computer program for high school biology. Participants completed a multimedia program that presented content by way of text or audio with lean text. In addition, several instructional sequences were presented either with static illustrations or animations.…

  8. EXPERIMENTAL STUDIES FOR DEVELOPMENT HIGH-POWER AUDIO SPEAKER DEVICES PERFORMANCE USING PERMANENT NdFeB MAGNETS SPECIAL TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Constantin D. STĂNESCU

    2013-05-01

    Full Text Available In this paper the authors show the research made to improve the performance of high-power audio speaker devices using a special permanent NdFeB magnet technology. Magnetic losses inside these audio devices are due to mechanical system friction and to the thermal (Joule) effect of eddy currents. In this regard, conical surfaces were made, by a special technology, at the top plate and the center pin. By analysing the results obtained from modelling the magnetic circuit with the finite element method using an electronic software package, an efficiency increase of over 10 % was measured, from 1.136 T to 1.3 T.

  9. Music Identification System Using MPEG-7 Audio Signature Descriptors

    Science.gov (United States)

    You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae

    2013-01-01

    This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
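
    The two-stage search described above (coarse descriptors to shortlist candidates, full-resolution descriptors to confirm or reject) can be sketched as follows. This is an illustrative outline only; the fingerprint vectors, the Euclidean distance and the rejection threshold are placeholders, not the MPEG-7 matching procedure itself.

```python
import numpy as np

def two_stage_identify(query_low, query_full, db, shortlist_size=10, threshold=0.3):
    """Identify a query track in two stages: (1) rank the whole database
    with cheap low-resolution fingerprints, (2) re-score only the best
    candidates with full-resolution fingerprints.

    `db` maps track id -> (low_res_fp, full_res_fp); fingerprints are
    numpy vectors. Returning None means "query not in the database"."""
    coarse = sorted(db, key=lambda tid: np.linalg.norm(db[tid][0] - query_low))
    candidates = coarse[:shortlist_size]
    best = min(candidates, key=lambda tid: np.linalg.norm(db[tid][1] - query_full))
    best_dist = np.linalg.norm(db[best][1] - query_full)
    return best if best_dist < threshold else None
```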

  10. Music Identification System Using MPEG-7 Audio Signature Descriptors

    Directory of Open Access Journals (Sweden)

    Shingchern D. You

    2013-01-01

    Full Text Available This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system’s database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control.

  11. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

    Full Text Available This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality-impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a correlation of 95% with subjective ratings that were unknown to the model. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both quality- and impairment-factor-based models are further refined by taking the content-type into account. Finally, the different model variants are compared with modeling approaches described in the literature.

  12. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high fidelity simulation has become increasingly popular in nursing education to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t=2.38, p<0.05) were found in relation to the transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge and this is vital to quality educational outcomes. Copyright © 2017. Published by

  13. Class-D audio amplifiers with negative feedback

    OpenAIRE

    Cox, Stephen M.; Candy, B. H.

    2006-01-01

    There are many different designs for audio amplifiers. Class-D, or switching, amplifiers generate their output signal in the form of a high-frequency square wave of variable duty cycle (ratio of on time to off time). The square-wave nature of the output allows a particularly efficient output stage, with minimal losses. The output is ultimately filtered to remove components of the spectrum above the audio range. Mathematical models are derived here for a variety of related class-D amplifier de...

  14. A second-order class-D audio amplifier

    OpenAIRE

    Cox, Stephen M.; Tan, M.T.; Yu, J.

    2011-01-01

    Class-D audio amplifiers are particularly efficient, and this efficiency has led to their ubiquity in a wide range of modern electronic appliances. Their output takes the form of a high-frequency square wave whose duty cycle (ratio of on-time to off-time) is modulated at low frequency according to the audio signal. A mathematical model is developed here for a second-order class-D amplifier design (i.e., containing one second-order integrator) with negative feedback. We derive exact expression...

  15. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    Science.gov (United States)

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  16. The Use of Audio and Animation in Computer Based Instruction.

    Science.gov (United States)

    Koroghlanian, Carol; Klein, James D.

    This study investigated the effects of audio, animation, and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a…

  17. A centralized audio presentation manager

    Energy Technology Data Exchange (ETDEWEB)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  18. Instrumental Landing Using Audio Indication

    Science.gov (United States)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

    The paper proposes an audio indication method for presenting to a pilot the information regarding the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of such audio signal parameters as loudness, frequency and modulation is discussed. To confirm the operability of the audio indication channel, experiments using a modern aircraft simulation facility were carried out. Simulated instrument landings were performed using the proposed audio method to indicate the aircraft deviations in relation to the glide path. The results proved comparable with simulated instrument landings using the traditional glideslope pointers. This encourages further development of the method in order to solve other precision piloting tasks.
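
    A minimal sketch of the kind of mapping described above is shown below: the glide-path deviation is encoded in the pitch of a short tone. The base frequency, the cents-per-degree scaling and the sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def deviation_tone(deviation_deg, duration=0.5, fs=8000,
                   base_freq=440.0, cents_per_deg=300.0):
    """Generate a tone whose pitch encodes glide-path deviation.

    Positive deviation (above the glide path) raises the pitch, negative
    deviation lowers it; the mapping of 300 cents per degree and the base
    frequency are illustrative choices, not values from the paper."""
    freq = base_freq * 2 ** (cents_per_deg * deviation_deg / 1200.0)
    t = np.arange(int(duration * fs)) / fs
    return 0.5 * np.sin(2 * np.pi * freq * t)

tone = deviation_tone(-0.7)   # 0.7 degrees below the glide path -> lower pitch
```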

  19. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  20. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  1. Engaging Students with Audio Feedback

    Science.gov (United States)

    Cann, Alan

    2014-01-01

    Students express widespread dissatisfaction with academic feedback. Teaching staff perceive a frequent lack of student engagement with written feedback, much of which goes uncollected or unread. Published evidence shows that audio feedback is highly acceptable to students but is underused. This paper explores methods to produce and deliver audio…

  2. Audio wiring guide how to wire the most popular audio and video connectors

    CERN Document Server

    Hechtman, John

    2012-01-01

    Whether you're a pro or an amateur, a musician or into multimedia, you can't afford to guess about audio wiring. The Audio Wiring Guide is a comprehensive, easy-to-use guide that explains exactly what you need to know. No matter the size of your wiring project or installation, this handy tool provides you with the essential information you need and the techniques to use it. Using The Audio Wiring Guide is like having an expert at your side. By following the clear, step-by-step directions, you can do professional-level work at a fraction of the cost.

  3. Harmonic Enhancement in Low Bitrate Audio Coding Using an Efficient Long-Term Predictor

    Directory of Open Access Journals (Sweden)

    Song Jeongook

    2010-01-01

    Full Text Available This paper proposes audio coding using an efficient long-term prediction method to enhance the perceptual quality of audio codecs for speech input signals at low bit-rates. The MPEG-4 AAC-LTP exploited a similar concept, but its improvement was not significant because of the small prediction gain due to long prediction lags and aliased components caused by the transformation with a time-domain aliasing cancelation (TDAC) technique. The proposed algorithm increases the prediction gain by employing a de-harmonizing predictor and a long-term compensation filter. The look-back memory elements are first constructed by applying the de-harmonizing predictor to the input signal, then the prediction residual is encoded and decoded by transform audio coding. Finally, the long-term compensation filter is applied to the updated look-back memory of the decoded prediction residual to obtain synthesized signals. Experimental results show that the proposed algorithm has much lower spectral distortion and higher perceptual quality than conventional approaches, especially for harmonic signals such as voiced speech.
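
    The de-harmonizing step is a form of long-term (pitch) prediction. The sketch below shows a basic one-tap long-term predictor that searches for the lag and gain minimising the residual energy; it illustrates the general principle only and is not the paper's predictor or compensation filter.

```python
import numpy as np

def long_term_predict(x, lag_min=60, lag_max=400):
    """One-tap long-term predictor: find the lag T and gain g minimising
    the energy of e(n) = x(n) - g * x(n - T), then return the residual.
    Lag range and the single-tap structure are assumptions of this sketch."""
    best = (None, 0.0, np.inf)
    for T in range(lag_min, lag_max + 1):
        past, cur = x[:-T], x[T:]
        g = np.dot(cur, past) / (np.dot(past, past) + 1e-12)
        err = np.sum((cur - g * past) ** 2)
        if err < best[2]:
            best = (T, g, err)
    T, g, _ = best
    return T, g, x[T:] - g * x[:-T]

# A strongly harmonic test signal (200 Hz at 16 kHz): the residual energy
# after long-term prediction is far smaller than the signal energy.
x = np.sin(2 * np.pi * 200 * np.arange(2000) / 16000)
T, g, res = long_term_predict(x)
print(T, round(g, 3), round(float(np.sum(res**2) / np.sum(x[T:]**2)), 6))
```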

  4. Comparative evaluation of audio and audio - tactile methods to improve oral hygiene status of visually impaired school children

    OpenAIRE

    R Krishnakumar; Swarna Swathi Silla; Sugumaran K Durai; Mohan Govindarajan; Syed Shaheed Ahamed; Logeshwari Mathivanan

    2016-01-01

    Background: Visually impaired children are unable to maintain good oral hygiene, as their tactile abilities are often underdeveloped owing to their visual disturbances. Conventional brushing techniques are often poorly comprehended by these children and hence, it was decided to evaluate the effectiveness of audio and audio-tactile methods in improving the oral hygiene of these children. Objective: To evaluate and compare the effectiveness of audio and audio-tactile methods in improving oral h...

  5. WLAN Technologies for Audio Delivery

    Directory of Open Access Journals (Sweden)

    Nicolas-Alexander Tatlas

    2007-01-01

    Full Text Available Audio delivery and reproduction for home or professional applications may greatly benefit from the adoption of digital wireless local area network (WLAN) technologies. The most challenging aspect of such integration relates to the synchronized and robust real-time streaming of multiple audio channels to multipoint receivers, for example, wireless active speakers. Here, it is shown that current WLAN solutions are susceptible to transmission errors. A detailed study of the IEEE802.11e protocol (currently under ratification) is also presented and all relevant distortions are assessed via an analytical and experimental methodology. A novel synchronization scheme is also introduced, allowing optimized playback for multiple receivers. The perceptual audio performance is assessed for both stereo and 5-channel applications based on either PCM or compressed audio signals.

  6. Mobile video-to-audio transducer and motion detection for sensory substitution

    Directory of Open Access Journals (Sweden)

    Maxime eAmbard

    2015-10-01

    Full Text Available Visuo-auditory sensory substitution systems are augmented reality devices that translate a video stream into an audio stream in order to help the blind in daily tasks requiring visuo-spatial information. In this work, we present both a new mobile device and a transcoding method specifically designed to sonify moving objects. Frame differencing is used to extract spatial features from the video stream and two-dimensional spatial information is converted into audio cues using pitch, interaural time difference and interaural level difference. Using numerical methods, we attempt to reconstruct visuo-spatial information based on audio signals generated from various video stimuli. We show that despite a contrasted visual background and a highly lossy encoding method, the information in the audio signal is sufficient to allow object localization, object trajectory evaluation, object approach detection, and spatial separation of multiple objects. We also show that this type of audio signal can be interpreted by human users by asking ten subjects to discriminate trajectories based on generated audio signals.
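
    The frame-differencing and pitch/interaural-level mapping described above can be illustrated with a simplified sketch: the centroid of the strongest moving pixels sets the pitch (vertical position) and the left/right balance (horizontal position) of a short stereo tone. All mapping ranges are assumptions for illustration; the device's actual transcoding also uses interaural time differences and is not reproduced here.

```python
import numpy as np

def sonify_motion(prev_frame, frame, fs=16000, dur=0.1,
                  f_min=200.0, f_max=2000.0):
    """Turn the dominant moving region between two grayscale frames into a
    short stereo tone: vertical position -> pitch, horizontal position ->
    left/right level difference. Mapping ranges are illustrative choices."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    if diff.max() < 1e-6:
        return np.zeros((int(dur * fs), 2))          # no motion -> silence
    ys, xs = np.nonzero(diff > 0.5 * diff.max())     # strongest moving pixels
    h, w = frame.shape
    y = ys.mean() / h                                 # 0 = top, 1 = bottom
    x = xs.mean() / w                                 # 0 = left, 1 = right
    freq = f_max - (f_max - f_min) * y                # higher in the image -> higher pitch
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    left, right = (1.0 - x) * tone, x * tone          # interaural level difference
    return np.stack([left, right], axis=1)
```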

  7. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under the six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were the video recordings of a face of a female Japanese speaking long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delays in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplace in which a worker must extract relevant speech from all the other competing noises.

  8. Four-quadrant flyback converter for direct audio power amplification

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching-mode audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple and compact solution with high efficiency, a higher level of integration, lower component count, less board space and eventually lower cost. Both peak and average current-mode control for use with 4Q flyback power converters are described and compared. Integrated magnetics is presented which simplifies the construction of the auxiliary power supplies for control biasing and isolated gate drives. The feasibility of the approach is proven on an audio power amplifier prototype for subwoofer applications. (au)

  9. Presence and the utility of audio spatialization

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2005-01-01

    The primary concern of this paper is whether the utility of audio spatialization, as opposed to the fidelity of audio spatialization, impacts presence. An experiment is reported that investigates the presence-performance relationship by decoupling spatial audio fidelity (realism) from task...... performance by varying the spatial fidelity of the audio independently of its relevance to performance on the search task that subjects were to perform. This was achieved by having conditions in which subjects searched for a music-playing radio (an active sound source) and having conditions in which...... supplied only nonattenuated audio was detrimental to performance. Even so, this group of subjects consistently had the largest increase in presence scores over the baseline experiment. Further, the Witmer and Singer (1998) presence questionnaire was more sensitive to whether the audio source was active...

  10. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application in audio signal coding. The BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing more adequate quantizers for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
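
    For readers unfamiliar with BTC, the sketch below shows the classic 1-bit block truncation coder applied to an audio block: each block is reduced to its mean, its standard deviation and a one-bit mask, and reconstruction preserves the first two moments. The paper's actual contribution (redesigned quantizers for audio) is not reproduced here.

```python
import numpy as np

def btc_encode(block):
    """Classic 1-bit Block Truncation Coding of one audio block: keep the
    block mean, the block standard deviation and a 1-bit per-sample mask."""
    mean, std = block.mean(), block.std()
    mask = block >= mean
    return mean, std, mask

def btc_decode(mean, std, mask):
    """Reconstruct two levels that preserve the block mean and variance."""
    n, q = mask.size, mask.sum()
    if q in (0, n):                 # constant block
        return np.full(n, mean)
    lo = mean - std * np.sqrt(q / (n - q))
    hi = mean + std * np.sqrt((n - q) / q)
    return np.where(mask, hi, lo)

x = np.sin(2 * np.pi * 440 * np.arange(256) / 16000)   # a toy audio block
mean, std, mask = btc_encode(x)
x_hat = btc_decode(mean, std, mask)
print(f"block SNR ~ {10 * np.log10(np.mean(x**2) / np.mean((x - x_hat)**2)):.1f} dB")
```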

  11. Robust audio-visual speech recognition under noisy audio-video conditions.

    Science.gov (United States)

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments, where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either/both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach in both clean conditions or when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
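
    A schematic sketch of frame-wise stream weighting in the spirit of the approach described above is given below: for each frame, candidate weights are tried and the one yielding the strongest fused class posterior is kept. This is only an illustration of dynamic, measurement-free weight selection; it is not the exact MWSP formulation.

```python
import numpy as np

def frame_posterior(p_audio, p_video, weights=np.linspace(0.0, 1.0, 11)):
    """Combine per-class audio and video posteriors for one frame.

    For each candidate weight w the streams are fused as a log-linear
    combination p_a**w * p_v**(1-w); the weight giving the largest maximum
    class posterior is kept. Schematic sketch only, not the paper's exact
    MWSP formulation."""
    best_w, best_post = None, None
    for w in weights:
        fused = (p_audio ** w) * (p_video ** (1.0 - w))
        fused /= fused.sum()                      # renormalise over classes
        if best_post is None or fused.max() > best_post.max():
            best_w, best_post = w, fused
    return best_w, best_post

# Example: video stream is uninformative (e.g. heavy compression artefacts),
# audio stream clearly favours class 0.
p_a = np.array([0.8, 0.1, 0.1])
p_v = np.array([0.34, 0.33, 0.33])
print(frame_posterior(p_a, p_v))   # picks w = 1.0 (trust the audio stream)
```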

  12. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.

  13. Digital audio watermarking fundamentals, techniques and challenges

    CERN Document Server

    Xiang, Yong; Yan, Bin

    2017-01-01

    This book offers comprehensive coverage on the most important aspects of audio watermarking, from classic techniques to the latest advances, from commonly investigated topics to emerging research subdomains, and from the research and development achievements to date, to current limitations, challenges, and future directions. It also addresses key topics such as reversible audio watermarking, audio watermarking with encryption, and imperceptibility control methods. The book sets itself apart from the existing literature in three main ways. Firstly, it not only reviews classical categories of audio watermarking techniques, but also provides detailed descriptions, analysis and experimental results of the latest work in each category. Secondly, it highlights the emerging research topic of reversible audio watermarking, including recent research trends, unique features, and the potentials of this subdomain. Lastly, the joint consideration of audio watermarking and encryption is also reviewed. With the help of this...

  14. A New Principle for a High Efficiency Power Audio Amplifier for Use with a Digital Preamplifier

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1986-01-01

    The use of class-B and class-D amplifiers for converting digital audio signals to analog signals is discussed. It is shown that the class-D amplifier is unsuitable due to distortion. Therefore, a new principle involving a switch-mode power supply and a class-B amplifier is suggested. By regulating...... the supply voltage to the amplifier according to the amplitude of the audio signal, a higher efficiency than can be obtained by the current principles is achieved. The regulation can be done very efficiently by generating the control signal to the power supply in advance of the audio signal, made possible...
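
    The idea of generating the supply control ahead of the audio can be mimicked digitally by delaying the audio path and deriving the rail command from a look-ahead envelope. The sketch below is a numerical illustration under assumed constants (look-ahead time, headroom factor, minimum rail voltage); it is not the analog implementation described in the paper.

```python
import numpy as np

def tracking_rail(audio, fs, lookahead_ms=5.0, headroom=1.2, v_min=2.0):
    """Compute a supply-rail reference that tracks the audio envelope.

    The audio sent to the class-B stage is delayed by `lookahead_ms`, so the
    rail command derived from the undelayed signal arrives "in advance" and
    the (slower) switch-mode supply has time to settle. Headroom and minimum
    rail voltage are illustrative numbers, not values from the paper."""
    n = int(lookahead_ms * 1e-3 * fs)
    # Look-ahead peak envelope over the next `n` samples.
    padded = np.concatenate([np.abs(audio), np.zeros(n)])
    env = np.array([padded[i:i + n + 1].max() for i in range(len(audio))])
    rail = np.maximum(headroom * env, v_min)     # rail command for the SMPS
    delayed_audio = np.concatenate([np.zeros(n), audio])[:len(audio)]
    return rail, delayed_audio
```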

  15. A new principle for a high-efficiency power audio amplifier for use with a digital preamplifier

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1987-01-01

    The use of class-B and class-D amplifiers for converting digital audio signals to analog signals is discussed. It is shown that the class-D amplifier is unsuitable due to distortion. Therefore a new principle involving a switch-mode power supply and a class-B amplifier is suggested. By regulating...... the supply voltage to the amplifier according to the amplitude of the audio signal, a higher efficiency than can be obtained by the usual principles is achieved. The regulation can be done very efficiently by generating the control signal to the power supply in advance of the audio signal, made possible...

  16. A 240W Monolithic Class-D Audio Amplifier Output Stage

    DEFF Research Database (Denmark)

    Nyboe, Flemming; Kaya, Cetin; Risbo, Lars

    2006-01-01

    A single-channel class-D audio amplifier output stage outputs 240 W unclipped into 4 Ω. The 0.1% open-loop THD+N allows using the device in a fully-digital audio signal path with no feedback. The output current capability is ±18 A and the part is fabricated in a 0.4 µm/1.8 µm high-voltage Bi...

  17. Emotion-based Music Retrieval on a Well-reduced Audio Feature Space

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Chua, Bee Yong; Nanopoulos, Alexandros

    2009-01-01

    -emotion. However, the real-time systems that retrieve music over large music databases can achieve an order of magnitude performance increase if applying multidimensional indexing over a dimensionally reduced audio feature space. To meet this performance achievement, in this paper, extensive studies are conducted......Music expresses emotion. A number of audio-extracted features influence the perceived emotional expression of music. These audio features generate a high-dimensional space, on which music similarity retrieval can be performed effectively, with respect to human perception of the music...... on a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly envisages which dimensionality reduction techniques on the considered audio feature space can preserve on average the accuracy of the emotion-based music retrieval....

  18. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with it to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. Then the paper uses the audio feature candidates as cues and develops a fusion method that effectively uses the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experiment results show that this method has high efficiency and adaptability to different kinds of news video.

  19. Semantic Context Detection Using Audio Event Fusion

    Directory of Open Access Journals (Sweden)

    Cheng Wen-Huang

    2006-01-01

    Full Text Available Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine, SVM) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.

  20. Distortion Estimation in Compressed Music Using Only Audio Fingerprints

    NARCIS (Netherlands)

    Doets, P.J.O.; Lagendijk, R.L.

    2008-01-01

    An audio fingerprint is a compact yet very robust representation of the perceptually relevant parts of an audio signal. It can be used for content-based audio identification, even when the audio is severely distorted. Audio compression changes the fingerprint slightly. We show that these small
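
    As a hedged illustration of the kind of fingerprint the abstract refers to, the sketch below computes a simple sub-fingerprint in the spirit of the Haitsma-Kalker scheme (signs of band-energy differences across adjacent bands and frames) and measures the bit error rate between the fingerprints of an original and a compressed copy. The band range, frame sizes, and function names are assumptions for illustration, not the authors' exact method.

```python
# Sketch only: a Haitsma-Kalker style sub-fingerprint and a bit-error-rate
# comparison between two versions of the same audio. All parameters are
# illustrative assumptions, not the paper's configuration.
import numpy as np
from scipy.signal import stft

def fingerprint(x, fs, n_bands=33, frame=2048, hop=64):
    f, _, Z = stft(x, fs, nperseg=frame, noverlap=frame - hop)
    power = np.abs(Z) ** 2
    edges = np.geomspace(300.0, 2000.0, n_bands + 1)       # log-spaced bands
    bands = np.array([power[(f >= lo) & (f < hi)].sum(axis=0)
                      for lo, hi in zip(edges[:-1], edges[1:])])
    d = np.diff(bands, axis=0)                              # adjacent bands
    return (d[:, 1:] - d[:, :-1]) > 0                       # adjacent frames -> bits

def bit_error_rate(fp_a, fp_b):
    n = min(fp_a.shape[1], fp_b.shape[1])
    return np.mean(fp_a[:, :n] != fp_b[:, :n])              # fraction of flipped bits
```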

  1. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Full Text Available Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  2. Single conversion audio amplifier and DC-AC converters with high performance and low complexity control scheme

    DEFF Research Database (Denmark)

    Poulsen, Søren; Andersen, Michael Andreas E.

    2004-01-01

    This paper proposes a novel control topology for a mains isolated single conversion audio amplifier and DC-AC converters. The topology is made for use in audio applications, and differs from prior art in terms of significantly reduced distortion as well as lower system complexity. The topology can...

  3. Elicitation of attributes for the evaluation of audio-on audio-interference

    DEFF Research Database (Denmark)

    Francombe, Jon; Mason, R.; Dewhirst, M.

    2014-01-01

    An experiment to determine the perceptual attributes of the experience of listening to a target audio program in the presence of an audio interferer was performed. The first stage was a free elicitation task in which a total of 572 phrases were produced. In the second stage, a consensus vocabulary procedure was used to reduce these phrases into a comprehensive set of attributes. Groups of experienced and inexperienced listeners determined nine and eight attributes, respectively. These attribute sets were combined by the listeners to produce a final set of 12 attributes: masking, calming, distraction...

  4. Audible Aliasing Distortion in Digital Audio Synthesis

    Directory of Open Access Journals (Sweden)

    J. Schimmel

    2012-04-01

    Full Text Available This paper deals with aliasing distortion in digital audio signal synthesis of classic periodic waveforms with infinite Fourier series, for electronic musical instruments. When these waveforms are generated in the digital domain, aliasing appears due to their unlimited bandwidth. There are several techniques for the synthesis of these signals that have been designed to avoid or reduce the aliasing distortion. However, these techniques have high computing demands. One can say that today's computers have enough computing power to use these methods. However, we have to realize that today's computer-aided music production requires tens of multi-timbre voices generated simultaneously by software synthesizers, and most of the computing power must be reserved for the hard-disc recording subsystem and real-time audio processing of many audio channels with a lot of audio effects. Trivially generated classic analog synthesizer waveforms are therefore still effective for sound synthesis. We cannot avoid the aliasing distortion, but spectral components produced by the aliasing can be masked by harmonic components and thus made inaudible if a sufficient oversampling ratio is used. This paper deals with the assessment of audible aliasing distortion with the help of a psychoacoustic model of simultaneous masking and compares the computing demands of trivial generation using oversampling with those of other methods.
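
    The oversampling idea above can be sketched in a few lines: generate the trivial (band-unlimited) waveform at several times the target rate, apply an anti-aliasing low-pass filter, and decimate, so that the components which fold back are attenuated. This is my own minimal illustration, not the paper's implementation; the sample rate, oversampling ratio, and pitch are arbitrary values.

```python
# Minimal illustration (not the paper's code): trivial sawtooth synthesis with
# oversampling, anti-alias filtering, and decimation back to the target rate.
import numpy as np
from scipy.signal import decimate

def trivial_saw(f0, fs, n):
    # Naive, band-unlimited sawtooth: a phase ramp wrapped to [-1, 1).
    t = np.arange(n) / fs
    return 2.0 * ((f0 * t) % 1.0) - 1.0

fs_out, ratio, f0 = 48_000, 8, 1_244.5       # target rate, oversampling ratio, pitch (Hz)
n_out = fs_out                               # one second of output
x_oversampled = trivial_saw(f0, fs_out * ratio, n_out * ratio)
x = decimate(x_oversampled, ratio, ftype="fir")   # low-pass + downsample to 48 kHz
```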

  5. CERN automatic audio-conference service

    CERN Multimedia

    Sierra Moral, R

    2009-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first Euro...

  6. CERN automatic audio-conference service

    CERN Document Server

    Sierra Moral, R

    2010-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first Euro...

  7. The Safeguard of Audio Collections: A Computer Science Based Approach to Quality Control—The Case of the Sound Archive of the Arena di Verona

    Directory of Open Access Journals (Sweden)

    Federica Bressan

    2013-01-01

    Full Text Available In the field of multimedia, very little attention is given to the activities involved in the preservation of audio documents. At the same time, more and more archives storing audio and video documents face the problem of obsolescing and degrading media, which could largely benefit from the instruments and the methodologies of research in multimedia. This paper presents the methodology and the results of the Italian project REVIVAL, aimed at the development of a hardware/software platform to support the active preservation of the audio collection of the Fondazione Arena di Verona, one of the finest in Europe for the operatic genre, with special attention to protocols and tools for quality control. On the scientific side, the most significant objectives achieved by the project are (i) the setup of a working environment inside the archive, (ii) the knowledge transfer to the archival personnel, (iii) the realization of chemical analyses on magnetic tapes in collaboration with experts in the fields of materials science and chemistry, and (iv) the development of original open-source software tools. On the cultural side, the recovery, the safeguard, and the access to unique copies of unpublished live recordings of artists of the calibre of Domingo and Pavarotti are of great musicological and economic value.

  8. Debugging of Class-D Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Crone, Lasse; Pedersen, Jeppe Arnsdorf; Mønster, Jakob Døllner

    2012-01-01

    Determining and optimizing the performance of a Class-D audio power amplifier can be very difficult without knowledge of the use of audio performance measuring equipment and of how the various noise and distortion sources influence the audio performance. This paper gives an introduction on how to measure...

  9. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high- and low-pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  10. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements swarm into radios, it is necessary to establish an audio advertising dataset which could be used to analyze and classify the advertisement. A method of how to establish a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement's sample is given in *.wav file format, and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The classifying rationality of the advertisements in this dataset is proved by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for correlative audio advertisement experimental studies.

  11. Making the Switch to Digital Audio

    Directory of Open Access Journals (Sweden)

    Shannon Gwin Mitchell

    2004-12-01

    Full Text Available In this article, the authors describe the process of converting from analog to digital audio data. They address the step-by-step decisions that they made in selecting hardware and software for recording and converting digital audio, issues of system integration, and cost considerations. The authors present a brief description of how digital audio is being used in their current research project and how it has enhanced the “quality” of their qualitative research.

  12. Efficient Audio Power Amplification - Challenges

    DEFF Research Database (Denmark)

    Andersen, Michael Andreas E.

    2005-01-01

    For more than a decade efficient audio power amplification has evolved and today switch-mode audio power amplification in various forms are the state-of-the-art. The technical steps that lead to this evolution are described and in addition many of the challenges still to be faced and where...

  13. Optimal control of a high-frequency class-D amplifier

    DEFF Research Database (Denmark)

    Dahl, Nicolai J.; Iversen, Niels Elkjær; Knott, Arnold

    2018-01-01

    Control loops have been used with switch-mode audio amplifiers to improve the sound quality of the amplifier. Because these amplifiers use a high-frequency modulation, precautions in the controller design must be taken. Further, the quality factor of the output filter can have a great impact...... on the controller's capabilities to suppress noise and track the audio signal. In this paper design methods for modern control are presented. The control method proves to easily overcome the challenge of designing a good performing controller when the output filter has a high quality factor. The results show...... that the controller is able to produce a clear improvement in the Total Harmonic Distortion, with up to a 30 times improvement compared to open-loop and a clear reduction in the noise. This places the audio quality on par with current solutions....

  14. A 240W Monolithic Class-D Audio Amplifier Output Stage

    OpenAIRE

    Nyboe, Flemming; Kaya, Cetin; Risbo, Lars; Andreani, Pietro

    2006-01-01

    A single-channel class-D audio amplifier output stage outputs 240 W unclipped into 4 Ω. 0.1% open-loop THD+N allows using the device in a fully-digital audio signal path with no feedback. The output current capability is ±18 A and the part is fabricated in a 0.4 µm/1.8 µm high-voltage BiCMOS process. Over-current sensing protects the output from short circuits.

  15. Coexistence issues for a 2.4 GHz wireless audio streaming in presence of bluetooth paging and WLAN

    Science.gov (United States)

    Pfeiffer, F.; Rashwan, M.; Biebl, E.; Napholz, B.

    2015-11-01

    Nowadays, customers expect to integrate their mobile electronic devices (smartphones and laptops) in a vehicle to form a wireless network. Typically, IEEE 802.11 is used to provide a high-speed wireless local area network (WLAN) and Bluetooth is used for cable replacement applications in a wireless personal area network (PAN). In addition, Daimler uses KLEER as a third wireless technology in the unlicensed (UL) 2.4 GHz ISM band to transmit full CD-quality digital audio. As Bluetooth, IEEE 802.11 and KLEER operate in the same frequency band, it has to be ensured that all three technologies can be used simultaneously without interference. In this paper, we focus on the impact of Bluetooth and IEEE 802.11 as interferers in the presence of a KLEER audio transmission.

  16. Consequence of audio visual collection in school libraries

    OpenAIRE

    Kuri, Ramesh

    2016-01-01

    The collection of audio-visual materials in a library plays an important role in teaching and learning. The importance of audio-visual (AV) technology in education should not be underestimated. If the audio-visual collection in a library is carefully planned and designed, it can provide a rich learning environment. In this article, the author discusses the consequences of audio-visual collections in libraries, especially for students using school libraries.

  17. A compact electroencephalogram recording device with integrated audio stimulation system

    Science.gov (United States)

    Paukkunen, Antti K. O.; Kurttio, Anttu A.; Leminen, Miika M.; Sepponen, Raimo E.

    2010-06-01

    A compact (96×128×32 mm³, 374 g), battery-powered, eight-channel electroencephalogram recording device with an integrated audio stimulation system and a wireless interface is presented. The recording device is capable of producing high-quality data, while the operating time is also reasonable for evoked potential studies. The effective measurement resolution is about 4 nV at a 200 Hz sample rate, the typical noise level is below 0.7 μVrms at 0.16-70 Hz, and the estimated operating time is 1.5 h. An embedded audio decoder circuit reads and plays wave sound files stored on a memory card. The activities are controlled by an 8 bit main control unit which allows accurate timing of the stimuli. The interstimulus interval jitter measured is less than 1 ms. Wireless communication is made through Bluetooth and the data recorded are transmitted to an external personal computer (PC) interface in real time. The PC interface is implemented with LABVIEW® and in addition to data acquisition it also allows online signal processing, data storage, and control of measurement activities such as contact impedance measurement, for example. The practical application of the device is demonstrated in a mismatch negativity experiment with three test subjects.

  18. Efficient audio power amplification - challenges

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, Michael A.E.

    2005-07-01

    For more than a decade efficient audio power amplification has evolved, and today switch-mode audio power amplification in various forms is the state-of-the-art. The technical steps that led to this evolution are described, and many of the challenges still to be faced, where extensive research and development are needed, are covered. (au)

  19. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    Science.gov (United States)

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach which optimizes the audio recording selection based on the user's sleep status detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones and a microcontroller with wireless circuit for control and data streaming; 2) a mobile app to receive EEG signals for online sleep monitoring and audio playback control. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conduct experiments on 28 healthy subjects, each undergoing two nap sessions - one with a quiet background and one with our audio stimulation. We compare the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep onset groups. The p-value obtained from the Wilcoxon Signed Rank Test is 1.22e-04 for the slow onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty falling asleep.
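
    For readers unfamiliar with the statistic quoted above, the snippet below shows how a paired Wilcoxon signed-rank test of time-to-sleep under the two conditions can be computed with SciPy; the numbers are made-up placeholders, not the study's data.

```python
# Hypothetical example of the paired test reported above; the minutes listed
# here are invented placeholders, not measurements from the iSleep study.
from scipy.stats import wilcoxon

time_to_sleep_quiet = [28.0, 31.5, 25.0, 40.2, 33.1, 27.8, 36.4]   # minutes
time_to_sleep_audio = [19.5, 22.0, 18.3, 30.1, 24.6, 20.9, 27.2]   # minutes

stat, p_value = wilcoxon(time_to_sleep_quiet, time_to_sleep_audio)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```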

  20. Detection Of Alterations In Audio Files Using Spectrograph Analysis

    Directory of Open Access Journals (Sweden)

    Anandha Krishnan G

    2015-08-01

    Full Text Available This study was carried out to detect changes in audio files using spectrographs. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to find the changes in the spectrograph of audio files after altering them, to compare the alterations with the spectrographs of the original files, and to check for similarities and differences between MP3 and WAV. Five different alterations were carried out on each audio file to analyze the differences between the original and the altered file. For altering an MP3 or WAV audio file by cut-copy, the file was opened in Audacity and a different audio segment was pasted into it; this new file was then analyzed to view the differences. Noise was reduced by adjusting the necessary parameters, and the differences between the new file and the original file were analyzed. The edited audio file was then opened in the software named Spek, which after analysis produces a graph of that particular file; this graph was saved for further analysis. The graph of the original audio was combined with the graph of the edited audio file to see the alterations.
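
    A rough, scripted version of the comparison workflow described above (assuming the soundfile package for reading WAV files, and using SciPy in place of the Spek GUI) might look like the following; the file names are placeholders.

```python
# Sketch of comparing the spectrograms of an original and an edited file.
# File names are placeholders; soundfile and scipy are assumed available.
import numpy as np
import soundfile as sf
from scipy.signal import spectrogram

def spectro_db(path, nperseg=2048):
    x, fs = sf.read(path)
    if x.ndim > 1:                          # mix stereo down to mono
        x = x.mean(axis=1)
    f, t, S = spectrogram(x, fs, nperseg=nperseg)
    return 10 * np.log10(S + 1e-12)         # magnitude in dB

S_orig = spectro_db("original.wav")
S_edit = spectro_db("edited.wav")
n = min(S_orig.shape[1], S_edit.shape[1])
print("largest spectral deviation (dB):", np.abs(S_orig[:, :n] - S_edit[:, :n]).max())
```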

  1. SECONDARY AUDIO ACTIVITY FOR MAINTAINING THE ALERTNESS OF INDONESIAN CAR DRIVERS

    Directory of Open Access Journals (Sweden)

    Iftikar Zahedi Sutalaksana

    2013-03-01

    Full Text Available The rate of traffic accidents involving cars in Indonesia is increasingly alarming. The prominent role of human factors as the main cause of accidents deserves attention. Reduced alertness while driving, caused by drowsiness or fatigue, is one of the conditions that leads to accidents. This paper describes the application of an audio response test as a secondary activity while driving a car. The response test in question is a set of applications on the car dashboard that demands a response from the driver each time an audio stimulus is presented. This audio response test is proposed as a monitor of the driver's level of alertness while driving. Driver alertness is the condition, while driving, of being awake, attentive, and able to process all stimuli well. The study results in a form of audio response test integrated with the driving system inside the car. The sound source is played at a constant intensity between 80 and 85 dB, and the sound stops when the driver responds to the stimulus. The response test is designed to monitor the driver's level of alertness throughout the drive, and its application is expected to help reduce the rate of traffic accidents in Indonesia. Keywords: driving, secondary activity, audio, alertness, response test

  2. AudioMUD: a multiuser virtual environment for blind people.

    Science.gov (United States)

    Sánchez, Jaime; Hassler, Tiago

    2007-03-01

    A number of virtual environments have been developed during the last years. Among them there are some applications for blind people based on different types of audio, from simple sounds to 3-D audio. In this study, we pursued a different approach. We designed AudioMUD by using spoken text to describe the environment, navigation, and interaction. We have also introduced some collaborative features into the interaction between blind users. The core of a multiuser MUD game is a networked textual virtual environment. We developed AudioMUD by adding some collaborative features to the basic idea of a MUD and placed a simulated virtual environment inside the human body. This paper presents the design and usability evaluation of AudioMUD. Blind learners were motivated when interacting with AudioMUD, and audio and interface design elements helped to improve the interaction.

  3. Real-Time Transmission and Storage of Video, Audio, and Health Data in Emergency and Home Care Situations

    Directory of Open Access Journals (Sweden)

    Riccardo Stagnaro

    2007-01-01

    Full Text Available The increase in the availability of bandwidth for wireless links, network integration, and the computational power on fixed and mobile platforms at affordable costs allows nowadays for the handling of audio and video data, their quality making them suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system which is described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, respectively H.264 and G.723.1, were implemented and optimized in order to obtain high performance on the system target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database in the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine applications. In this paper, the emergency case study is described.

  4. Audio Recording of Children with Dyslalia

    OpenAIRE

    Stefan Gheorghe Pentiuc; Maria D. Schipor; Ovidiu A. Schipor

    2008-01-01

    In this paper we present our research on the automatic parsing of audio recordings. These recordings are obtained from children with dyslalia and are necessary for an accurate identification of speech problems. We developed a software application that helps parse the audio recordings in real time.

  5. Parametric time-frequency domain spatial audio

    CERN Document Server

    Delikaris-Manias, Symeon; Politis, Archontis

    2018-01-01

    This book provides readers with the principles and best practices in spatial audio signal processing. It describes how sound fields and their perceptual attributes are captured and analyzed within the time-frequency domain, how essential representation parameters are coded, and how such signals are efficiently reproduced for practical applications. The book is split into four parts starting with an overview of the fundamentals. It then goes on to explain the reproduction of spatial sound before offering an examination of signal-dependent spatial filtering. The book finishes with coverage of both current and future applications and the direction that spatial audio research is heading in. Parametric Time-frequency Domain Spatial Audio focuses on applications in entertainment audio, including music, home cinema, and gaming--covering the capturing and reproduction of spatial sound as well as its generation, transduction, representation, transmission, and perception. This book will teach readers the tools needed...

  6. Editing Audio with Audacity

    Directory of Open Access Journals (Sweden)

    Brandon Walsh

    2016-08-01

    Full Text Available For those interested in audio, basic sound editing skills go a long way. Being able to handle and manipulate the materials can help you take control of your object of study: you can zoom in and extract particular moments to analyze, process the audio, and upload the materials to a server to complement a blog post on the topic. On a more practical level, these skills could also allow you to record and package recordings of yourself or others for distribution. That guest lecture taking place in your department? Record it and edit it yourself! Doing so is a lightweight way to distribute resources among various institutions, and it also helps make the materials more accessible for readers and listeners with a wide variety of learning needs. In this lesson you will learn how to use Audacity to load, record, edit, mix, and export audio files. Sound editing platforms are often expensive and offer extensive capabilities that can be overwhelming to the first-time user, but Audacity is a free and open source alternative that offers powerful capabilities for sound editing with a low barrier for entry. For this lesson we will work with two audio files: a recording of Bach’s Goldberg Variations available from MusOpen and another recording of your own voice that will be made in the course of the lesson. This tutorial uses Audacity 2.1.2, released January 2016.

  7. AudioPairBank: Towards A Large-Scale Tag-Pair-Based Audio Content Analysis

    OpenAIRE

    Sager, Sebastian; Elizalde, Benjamin; Borth, Damian; Schulze, Christian; Raj, Bhiksha; Lane, Ian

    2016-01-01

    Recently, sound recognition has been used to identify sounds, such as a car or a river. However, sounds have nuances that may be better described by adjective-noun pairs such as slow car, and verb-noun pairs such as flying insects, which are underexplored. Therefore, in this work we investigate the relation between audio content and both adjective-noun pairs and verb-noun pairs. Due to the lack of datasets with these kinds of annotations, we collected and processed the AudioPairBank corpus cons...

  8. Tourism research and audio methods

    DEFF Research Database (Denmark)

    Jensen, Martin Trandberg

    2016-01-01

    • Audio methods enrich sensuous tourism ethnographies. • The note suggests five research avenues for future auditory scholarship. • Sensuous tourism research has neglected the role of sounds in embodied tourism experiences.

  9. DAFX Digital Audio Effects

    CERN Document Server

    2011-01-01

    The rapid development in various fields of Digital Audio Effects, or DAFX, has led to new algorithms and this second edition of the popular book, DAFX: Digital Audio Effects has been updated throughout to reflect progress in the field. It maintains a unique approach to DAFX with a lecture-style introduction into the basics of effect processing. Each effect description begins with the presentation of the physical and acoustical phenomena, an explanation of the signal processing techniques to achieve the effect, followed by a discussion of musical applications and the control of effect parameter

  10. AUTOMATIC SEGMENTATION OF BROADCAST AUDIO SIGNALS USING AUTO ASSOCIATIVE NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2010-12-01

    Full Text Available In this paper, we describe automatic segmentation methods for audio broadcast data. Today, digital audio applications are part of our everyday lives. Since there are more and more digital audio databases in place these days, the importance of effective management for audio databases has become prominent. Broadcast audio data is recorded from television and comprises various categories of audio signals. Efficient algorithms for segmenting the audio broadcast data into predefined categories are proposed. Audio features, namely linear prediction coefficients (LPC), linear prediction cepstral coefficients (LPCC), and Mel-frequency cepstral coefficients (MFCC), are extracted to characterize the audio data. Auto-associative neural networks are used to segment the audio data into predefined categories using the extracted features. Experimental results indicate that the proposed algorithms can produce satisfactory results.
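
    As a small illustration of the feature extraction step (not the paper's pipeline, and omitting the LPC/LPCC features and the auto-associative network), per-frame MFCCs for a broadcast clip could be computed as follows, assuming the librosa package and a placeholder file name.

```python
# Illustrative sketch only: per-frame MFCC features for a broadcast audio clip.
# The LPC/LPCC features and the AANN classifier from the paper are not shown.
import librosa

y, sr = librosa.load("broadcast_clip.wav", sr=16_000)           # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=400, hop_length=160)           # 25 ms / 10 ms frames
print(mfcc.shape)   # (13, number_of_frames), one column per analysis frame
```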

  11. 47 CFR 10.520 - Common audio attention signal.

    Science.gov (United States)

    2010-10-01

    47 CFR Part 10 (Telecommunication), Equipment Requirements, § 10.520 Common audio attention signal: A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  12. Audio-Biofeedback training for posture and balance in Patients with Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Zijlstra Wiebren

    2011-06-01

    Full Text Available Abstract Background: Patients with Parkinson's disease (PD) suffer from dysrhythmic and disturbed gait, impaired balance, and decreased postural responses. These alterations lead to falls, especially as the disease progresses. Based on the observation that postural control improved in patients with vestibular dysfunction after audio-biofeedback training, we tested the feasibility and effects of this training modality in patients with PD. Methods: Seven patients with PD were included in a pilot study comprised of a six-week intervention program. The training was individualized to each patient's needs and was delivered using an audio-biofeedback (ABF) system with headphones. The training was focused on improving posture, sit-to-stand abilities, and dynamic balance in various positions. Non-parametric statistics were used to evaluate training effects. Results: The ABF system was well accepted by all participants with no adverse events reported. Patients declared high satisfaction with the training. A significant improvement of balance, as assessed by the Berg Balance Scale, was observed (improvement of 3%, p = 0.032), and a trend in the Timed Up and Go test (improvement of 11%; p = 0.07) was also seen. In addition, the training appeared to have a positive influence on psychosocial aspects of the disease as assessed by the Parkinson's disease quality of life questionnaire (PDQ-39) and the level of depression as assessed by the Geriatric Depression Scale. Conclusions: This is, to our knowledge, the first report demonstrating that audio-biofeedback training for patients with PD is feasible and is associated with improvements of balance and several psychosocial aspects.

  13. Audio Recording of Children with Dyslalia

    Directory of Open Access Journals (Sweden)

    Stefan Gheorghe Pentiuc

    2008-01-01

    Full Text Available In this paper we present our research on the automatic parsing of audio recordings. These recordings are obtained from children with dyslalia and are necessary for an accurate identification of speech problems. We developed a software application that helps parse the audio recordings in real time.

  14. Audio Journal in an ELT Context

    Directory of Open Access Journals (Sweden)

    Neşe Aysin Siyli

    2012-09-01

    Full Text Available It is widely acknowledged that one of the most serious problems students of English as a foreign language face is the lack of opportunity to practice the language outside the classroom. Generally, the classroom is the sole environment where they can practice English, which by its nature does not provide a rich enough setting to help students develop their competence by putting the language into practice. Motivated by this need, this descriptive study investigated the impact of audio dialog journals on students' speaking skills. It also aimed to gain insights into students' and the teacher's opinions on keeping audio dialog journals outside the class. The data of the study were drawn from student and teacher audio dialog journals, students' written feedback, interviews held with the students, and teacher observations. The descriptive analysis of the data revealed that audio dialog journals served a number of functions ranging from cognitive to linguistic, from pedagogical to psychological, and social. The findings and pedagogical implications of the study are discussed in detail.

  15. Virtual Microphones for Multichannel Audio Resynthesis

    Directory of Open Access Journals (Sweden)

    Athanasios Mouchtaris

    2003-09-01

    Full Text Available Multichannel audio offers significant advantages for music reproduction, including the ability to provide better localization and envelopment, as well as reduced imaging distortion. On the other hand, multichannel audio is a demanding media type in terms of transmission requirements. Often, bandwidth limitations prohibit transmission of multiple audio channels. In such cases, an alternative is to transmit only one or two reference channels and recreate the rest of the channels at the receiving end. Here, we propose a system capable of synthesizing the required signals from a smaller set of signals recorded in a particular venue. These synthesized “virtual” microphone signals can be used to produce multichannel recordings that accurately capture the acoustics of that venue. Applications of the proposed system include transmission of multichannel audio over the current Internet infrastructure and, as an extension of the methods proposed here, remastering existing monophonic and stereophonic recordings for multichannel rendering.

  16. Multi Carrier Modulator for Switch-Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Pfaffinger, Gerhard; Andersen, Michael Andreas E.

    2008-01-01

    While switch-mode audio power amplifiers allow compact implementations and high output power levels due to their high power efficiency, they are very well known for creating electromagnetic interference (EMI) with other electronic equipment, in particular radio receivers. Lowering the EMI of swit...

  17. Music Genre Classification Using MIDI and Audio Features

    Science.gov (United States)

    Cataltepe, Zehra; Yaslan, Yusuf; Sonmez, Abdullah

    2007-12-01

    We report our findings on using MIDI files and audio features from MIDI, separately and combined together, for MIDI music genre classification. We use McKay and Fujinaga's 3-root and 9-leaf genre data set. In order to compute distances between MIDI pieces, we use normalized compression distance (NCD). NCD uses the compressed length of a string as an approximation to its Kolmogorov complexity and has previously been used for music genre and composer clustering. We convert the MIDI pieces to audio and then use the audio features to train different classifiers. The MIDI and audio-from-MIDI classifiers alone achieve much smaller accuracies than those reported by McKay and Fujinaga, who used not NCD but a number of domain-based MIDI features for their classification. Combining the MIDI and audio-from-MIDI classifiers improves accuracy and gets closer to, but remains below, McKay and Fujinaga's accuracies. The best root genre accuracies achieved using MIDI, audio, and their combination are 0.75, 0.86, and 0.93, respectively, compared to 0.98 of McKay and Fujinaga. Successful classifier combination requires diversity of the base classifiers. We achieve diversity by using a certain number of seconds of the MIDI file, different sample rates and sizes for the audio file, and different classification algorithms.
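
    The NCD mentioned above is simple to state in code; the following sketch uses zlib as a stand-in for whichever compressor the authors chose.

```python
# Normalized compression distance (NCD) between two byte strings, with zlib
# standing in for the compressor; the choice of compressor is an assumption.
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage: pairwise distance between two MIDI files (paths are placeholders).
# d = ncd(open("a.mid", "rb").read(), open("b.mid", "rb").read())
```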

  18. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is the case for which the network is already set-up, the aud...

  19. High efficiency class-I audio power amplifier using a single adaptive supply

    International Nuclear Information System (INIS)

    Peng Zhenfei; Yang Shanshand; Feng Yong; Hong Zhiliang; Liu Yang

    2012-01-01

    A high efficiency class-I linear audio power amplifier (PA) with an adaptive supply is presented. Its efficiency is improved by a dynamic supply to reduce the power transistors' voltage drop. A gain compression technique is adopted to make the amplifier accommodate a single positive supply. Circuit complexity and chip area are reduced because no charge pump is necessary for the negative supply. A common shared mode voltage and a symmetric layout pattern are used to minimize the non-linearity. A peak efficiency of 80% is reached at peak output power. The measured THD+N before and after the supply switching point are 0.01% and 0.05%, respectively. The maximum output power is 410 mW for an 8 Ω speaker load. Unlike switching amplifiers, the class-I amplifier operates as a linear amplifier and hence has a low EMI. The advantage of a high efficiency and low EMI makes the class-I amplifier suitable for portable and RF sensitive applications. (semiconductor integrated circuits)

  20. Realtime Audio with Garbage Collection

    OpenAIRE

    Matheussen, Kjetil Svalastog

    2010-01-01

    Two non-moving concurrent garbage collectors tailored for realtime audio processing are described. Both collectors work on copies of the heap to avoid cache misses and audio-disruptive synchronizations. Both collectors are targeted at multiprocessor personal computers. The first garbage collector works in uncooperative environments, and can replace Hans Boehm's conservative garbage collector for C and C++. The collector does not access the virtual memory system. Neither doe...

  1. Audio-Visual Classification of Sports Types

    DEFF Research Database (Denmark)

    Gade, Rikke; Abou-Zleikha, Mohamed; Christensen, Mads Græsbøll

    2015-01-01

    In this work we propose a method for classification of sports types from combined audio and visual features extracted from thermal video. From the audio, Mel-frequency cepstral coefficients (MFCC) are extracted, and PCA is applied to reduce the feature space to 10 dimensions. From the visual modali...
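
    The reduction step named above can be sketched as follows (my illustration with random placeholder features, not the authors' pipeline), projecting per-frame MFCC vectors down to 10 dimensions with PCA before classification.

```python
# Sketch of the PCA reduction step; the feature matrix here is random
# placeholder data, not MFCCs from thermal-video sports recordings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
mfcc_frames = rng.normal(size=(5000, 20))    # placeholder: 5000 frames x 20 coefficients
pca = PCA(n_components=10)
reduced = pca.fit_transform(mfcc_frames)     # (5000, 10) reduced feature space
print(reduced.shape, pca.explained_variance_ratio_.sum())
```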

  2. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Full Text Available Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of the number of sinusoids needed to represent signals at a given quality level.

  3. System-Level Optimization of a DAC for Hearing-Aid Audio Class D Output Stage

    DEFF Research Database (Denmark)

    Pracný, Peter; Jørgensen, Ivan Harald Holger; Bruun, Erik

    2013-01-01

    This paper deals with system-level optimization of a digital-to-analog converter (DAC) for a hearing-aid audio Class D output stage. We discuss the ΣΔ modulator system-level design parameters – the order, the oversampling ratio (OSR) and the number of bits in the quantizer. We show that combining a reduction of the OSR with an increase of the order results in considerable power savings while the audio quality is kept. For further savings in the ΣΔ modulator, overdesign and subsequent coarse coefficient quantization are used. A figure of merit (FOM) is introduced to confirm this optimization approach by comparing two ΣΔ modulator designs. The proposed optimization has impact on the whole hearing-aid audio back-end system, including less hardware in the interpolation filter and half the switching rate in the digital-pulse-width-modulation (DPWM) block and Class D output stage...
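
    How the order, OSR and quantizer bits trade off against each other can be sanity-checked with the standard textbook expression for the peak SQNR of an ideal ΣΔ modulator; this is a generic formula used here only for illustration, not the figure of merit introduced in the paper.

```python
# Back-of-the-envelope check (standard textbook formula, not the paper's FOM)
# of the order / OSR / quantizer-bits trade-off for an ideal sigma-delta modulator.
import math

def sigma_delta_peak_sqnr_db(order: int, osr: int, bits: int) -> float:
    return (6.02 * bits + 1.76
            + 10 * math.log10((2 * order + 1) / math.pi ** (2 * order))
            + (2 * order + 1) * 10 * math.log10(osr))

# Raising the order allows a lower OSR at a similar theoretical audio quality:
print(sigma_delta_peak_sqnr_db(order=2, osr=64, bits=1))   # roughly 85 dB
print(sigma_delta_peak_sqnr_db(order=4, osr=16, bits=1))   # roughly 86 dB
```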

  4. CERN automatic audio-conference service

    International Nuclear Information System (INIS)

    Sierra Moral, Rodrigo

    2010-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular for the users and has doubled the number of conferences in the past two years.

  5. CERN automatic audio-conference service

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Moral, Rodrigo, E-mail: Rodrigo.Sierra@cern.c [CERN, IT Department 1211 Geneva-23 (Switzerland)

    2010-04-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular for the users and has doubled the number of conferences in the past two years.

  6. CERN automatic audio-conference service

    Science.gov (United States)

    Sierra Moral, Rodrigo

    2010-04-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular for the users and has doubled the number of conferences in the past two years.

  7. Near-field Localization of Audio

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    Localization of audio sources using microphone arrays has been an important research problem for more than two decades. Many traditional methods for solving the problem are based on a two-stage procedure: first, information about the audio source, such as time differences-of-arrival (TDOAs) and gain ratios-of-arrival (GROAs) between microphones is estimated, and, second, this knowledge is used to localize the audio source. These methods often have a low computational complexity, but this comes at the cost of a limited estimation accuracy. Therefore, we propose a new localization approach......, where the desired signal is modeled using TDOAs and GROAs, which are determined by the source location. This facilitates the derivation of one-stage, maximum likelihood methods under a white Gaussian noise assumption that is applicable in both near- and far-field scenarios. Simulations show...
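
    For context on the two-stage baseline mentioned above, the first stage is often a TDOA estimate such as GCC-PHAT between a microphone pair; the sketch below is a generic textbook implementation, not the one-stage maximum likelihood method proposed in the paper.

```python
# Generic GCC-PHAT TDOA estimate between two microphone signals (the classical
# first stage of the two-stage baseline, not the paper's one-stage method).
import numpy as np

def gcc_phat_tdoa(x1, x2, fs, max_tau=None):
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12                  # PHAT weighting
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2 if max_tau is None else min(int(max_tau * fs), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                               # delay of x1 relative to x2, in seconds
```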

  8. Musical Audio Synthesis Using Autoencoding Neural Nets

    OpenAIRE

    Sarroff, Andy; Casey, Michael A.

    2014-01-01

    With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained to learn a mapping from low level audio features to one or more higher-level representations. Such artificial neural networks are commonly used in classification and regression settings to perform arbitrary tasks. In this work we suggest repurposing autoencoding neural networks as musical audio synthesizers. We offer an interactive musical audio synt...

  9. Spatial audio reproduction with primary ambient extraction

    CERN Document Server

    He, JianJun

    2017-01-01

    This book first introduces the background of spatial audio reproduction, with different types of audio content and for different types of playback systems. A literature study on the classical and emerging Primary Ambient Extraction (PAE) techniques is presented. The emerging techniques aim to improve the extraction performance and also enhance the robustness of PAE approaches in dealing with more complex signals encountered in practice. The in-depth theoretical study helps readers to understand the rationales behind these approaches. Extensive objective and subjective experiments validate the feasibility of applying PAE in spatial audio reproduction systems. These experimental results, together with some representative audio examples and MATLAB codes of the key algorithms, illustrate clearly the differences among various approaches and also help readers gain insights on selecting different approaches for different applications.

  10. Music Genre Classification Using MIDI and Audio Features

    Directory of Open Access Journals (Sweden)

    Abdullah Sonmez

    2007-01-01

    Full Text Available We report our findings on using MIDI files and audio features from MIDI, separately and combined together, for MIDI music genre classification. We use McKay and Fujinaga's 3-root and 9-leaf genre data set. In order to compute distances between MIDI pieces, we use normalized compression distance (NCD). NCD uses the compressed length of a string as an approximation to its Kolmogorov complexity and has previously been used for music genre and composer clustering. We convert the MIDI pieces to audio and then use the audio features to train different classifiers. The MIDI and audio-from-MIDI classifiers alone achieve much smaller accuracies than those reported by McKay and Fujinaga, who used not NCD but a number of domain-based MIDI features for their classification. Combining the MIDI and audio-from-MIDI classifiers improves accuracy and gets closer to, but remains below, McKay and Fujinaga's accuracies. The best root genre accuracies achieved using MIDI, audio, and their combination are 0.75, 0.86, and 0.93, respectively, compared to 0.98 of McKay and Fujinaga. Successful classifier combination requires diversity of the base classifiers. We achieve diversity by using a certain number of seconds of the MIDI file, different sample rates and sizes for the audio file, and different classification algorithms.

  11. Current-Driven Switch-Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Buhl, Niels Christian; Andersen, Michael A. E.

    2012-01-01

    The conversion of electrical energy into sound waves by electromechanical transducers is proportional to the current through the coil of the transducer. However virtually all audio power amplifiers provide a controlled voltage through the interface to the transducer. This paper is presenting...... a switch-mode audio power amplifier not only providing controlled current but also being supplied by current. This results in an output filter size reduction by a factor of 6. The implemented prototype shows decent audio performance with THD + N below 0.1 %....

  12. Multilevel tracking power supply for switch-mode audio power amplifiers

    DEFF Research Database (Denmark)

    Iversen, Niels Elkjær; Lazarevic, Vladan; Vasic, Miroslav

    2018-01-01

    Switch-mode technology is the common choice for high-efficiency audio power amplifiers. The dynamic nature of real audio reduces efficiency, as less continuous output power can be achieved. Based on methods used for RF amplifiers, this paper proposes to apply envelope tracking techniques to the power supply in order to improve efficiency. A 100 W prototype system was designed. Measured results show that systems employing envelope tracking can improve system efficiency from 2% to 12%, i.e. a factor of 6. The temperature rise is strongly reduced, especially for the switching power MOSFETs...

  13. Augmenting Environmental Interaction in Audio Feedback Systems

    Directory of Open Access Journals (Sweden)

    Seunghun Kim

    2016-04-01

    Full Text Available Audio feedback is defined as a positive feedback of acoustic signals where an audio input and output form a loop, and may be utilized artistically. This article presents new context-based controls over audio feedback, leading to the generation of desired sonic behaviors by enriching the influence of existing acoustic information such as room response and ambient noise. This ecological approach to audio feedback emphasizes mutual sonic interaction between signal processing and the acoustic environment. Mappings from analyses of the received signal to signal-processing parameters are designed to emphasize this specificity as an aesthetic goal. Our feedback system presents four types of mappings: approximate analyses of room reverberation to tempo-scale characteristics, ambient noise to amplitude and two different approximations of resonances to timbre. These mappings are validated computationally and evaluated experimentally in different acoustic conditions.

  14. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    Science.gov (United States)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer death among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause a substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiration irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that cosine and cosine 4 models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution

  15. Fusion of audio and visual cues for laughter detection

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audio- visual approach to distinguishing laughter from speech and we show that integrating the information from audio and video channels leads to improved performance over single-modal

  16. A listening test system for automotive audio - listeners

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Hegarty, Patrick; Christensen, Flemming

    2007-01-01

    A series of experiments was conducted in order to validate an experimental procedure to perform listening tests on car audio systems in a simulation of the car environment in a laboratory, using binaural synthesis with head-tracking. Seven experts and 40 non-expert listeners rated a range...... of stimuli for 15 sound-quality attributes developed by the experts. This paper presents a comparison between the attribute ratings from the two groups of participants. Overall preference of the non-experts was also measured using direct ratings as well as indirect scaling based on paired comparisons...

  17. Digital Augmented Reality Audio Headset

    Directory of Open Access Journals (Sweden)

    Jussi Rämö

    2012-01-01

    Full Text Available Augmented reality audio (ARA combines virtual sound sources with the real sonic environment of the user. An ARA system can be realized with a headset containing binaural microphones. Ideally, the ARA headset should be acoustically transparent, that is, it should not cause audible modification to the surrounding sound. A practical implementation of an ARA mixer requires a low-latency headphone reproduction system with additional equalization to compensate for the attenuation and the modified ear canal resonances caused by the headphones. This paper proposes digital IIR filters to realize the required equalization and evaluates a real-time prototype ARA system. Measurements show that the throughput latency of the digital prototype ARA system can be less than 1.4 ms, which is sufficiently small in practice. When the direct and processed sounds are combined in the ear, a comb filtering effect is brought about and appears as notches in the frequency response. The comb filter effect in speech and music signals was studied in a listening test and it was found to be inaudible when the attenuation is 20 dB. Insert ARA headphones have a sufficient attenuation at frequencies above about 1 kHz. The proposed digital ARA system enables several immersive audio applications, such as a virtual audio tourist guide and audio teleconferencing.
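    The comb-filtering effect mentioned above, where the direct sound sums with a delayed and attenuated processed copy, is easy to illustrate numerically; a small sketch using the 1.4 ms latency and 20 dB attenuation figures quoted in the abstract.

```python
import numpy as np

def comb_magnitude_db(freqs_hz, delay_s, atten_db):
    """Magnitude response of y(t) = x(t) + g * x(t - delay), with g set by the attenuation."""
    g = 10 ** (-atten_db / 20.0)
    h = 1.0 + g * np.exp(-2j * np.pi * freqs_hz * delay_s)
    return 20 * np.log10(np.abs(h))

freqs = np.linspace(20, 20000, 5)
resp = comb_magnitude_db(freqs, delay_s=1.4e-3, atten_db=20.0)
# With 20 dB attenuation the ripple stays within roughly +-0.9 dB,
# consistent with the effect being reported as inaudible.
print(np.round(resp, 2))
```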

  18. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

    Full Text Available Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.

  19. Horatio Audio-Describes Shakespeare's "Hamlet": Blind and Low-Vision Theatre-Goers Evaluate an Unconventional Audio Description Strategy

    Science.gov (United States)

    Udo, J. P.; Acevedo, B.; Fels, D. I.

    2010-01-01

    Audio description (AD) has been introduced as one solution for providing people who are blind or have low vision with access to live theatre, film and television content. However, there is little research to inform the process, user preferences and presentation style. We present a study of a single live audio-described performance of Hart House…

  20. Audio localization for mobile robots

    OpenAIRE

    de Guillebon, Thibaut; Grau Saldes, Antoni; Bolea Monte, Yolanda

    2009-01-01

    The university department I worked for is developing a project based on interaction with robots in their environment. My task was to define an audio system for the robot. The audio system to be realized consists of a mobile head able to follow sound in its environment. The subject was treated as a research problem, with the freedom to find and develop different solutions and evolve them in the chosen direction.

  1. User Satisfaction Evaluation of an Audio Books Application [Evaluasi Kepuasan Pengguna terhadap Aplikasi Audio Books]

    Directory of Open Access Journals (Sweden)

    Raditya Maulana Anuraga

    2017-02-01

    Full Text Available Listeno is the first audio books application in Indonesia, letting users consume books in audio form much as they would listen to music. Listeno faces several problems: a requested offline mode has not yet been released, the security of its mp3 files must be considered, and its target of 100,000 active users has not yet been reached. This research evaluates user satisfaction with the audio books application using Nielsen's approach. The analysis combines Importance Performance Analysis (IPA) with a User Satisfaction Index (IKP), based on the indicators Usefulness, Utility, Usability, Learnability, Efficiency, Memorability, Error, and Satisfaction. The results show that users of the audio books application are fairly satisfied, with a calculated IKP of 69.58%.

  2. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    Science.gov (United States)

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.

  3. Joint evaluation of communication quality and user experience in an audio-visual virtual reality meeting

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello

    2013-01-01

    State-of-the-art speech intelligibility tests are created for evaluating acoustic communication devices, not audio-visual virtual reality systems. This paper presents a novel method to evaluate a communication situation based on both the speech intelligibility...

  4. Musical examination to bridge audio data and sheet music

    Science.gov (United States)

    Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali

    2015-03-01

    The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if you want to see a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system Musicians Aid to process musical representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure what they played was correct. More specifically, the system could show them exactly where they went wrong and how to adjust their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced. This would be particularly

  5. Audio segmentation using Flattened Local Trimmed Range for ecological acoustic space analysis

    Directory of Open Access Journals (Sweden)

    Giovany Vega

    2016-06-01

    Full Text Available The acoustic space in a given environment is filled with footprints arising from three processes: biophony, geophony and anthrophony. Bioacoustic research using passive acoustic sensors can result in thousands of recordings. An important component of processing these recordings is to automate signal detection. In this paper, we describe a new spectrogram-based approach for extracting individual audio events. Spectrogram-based audio event detection (AED) relies on separating the spectrogram into background (i.e., noise) and foreground (i.e., signal) classes using a threshold such as a global threshold, a per-band threshold, or one given by a classifier. These methods are either too sensitive to noise, designed for an individual species, or require prior training data. Our goal is to develop an algorithm that is not sensitive to noise, does not need any prior training data and works with any type of audio event. To do this, we propose: (1) a spectrogram filtering method, the Flattened Local Trimmed Range (FLTR) method, which models the spectrogram as a mixture of stationary and non-stationary energy processes and mitigates the effect of the stationary processes, and (2) an unsupervised algorithm that uses the filter to detect audio events. We measured the performance of the algorithm using a set of six thoroughly validated audio recordings and obtained a sensitivity of 94% and a positive predictive value of 89%. These sensitivity and positive predictive values are very high, given that the validated recordings are diverse and obtained from field conditions. The algorithm was then used to extract audio events in three datasets. Features of these audio events were plotted and showed the unique aspects of the three acoustic communities.
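    For orientation, a minimal sketch of the plain global-threshold variant of spectrogram-based AED that the FLTR method is designed to improve upon; it is not the FLTR filter itself, and the parameters are illustrative only.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import label

def detect_events(x, fs, threshold_db=-50.0):
    """Naive spectrogram-based AED: threshold the dB spectrogram and group
    connected foreground regions into events (global-threshold variant)."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
    sxx_db = 10 * np.log10(sxx + 1e-12)
    foreground = sxx_db > threshold_db            # background/foreground split
    regions, n_events = label(foreground)         # connected components = events
    events = []
    for k in range(1, n_events + 1):
        fi, ti = np.where(regions == k)
        events.append({"t_start": t[ti.min()], "t_end": t[ti.max()],
                       "f_low": f[fi.min()], "f_high": f[fi.max()]})
    return events

# Synthetic example: a 1 kHz tone burst buried in low-level noise.
fs = 22050
x = 0.01 * np.random.randn(fs)
x[8000:12000] += np.sin(2 * np.pi * 1000 * np.arange(4000) / fs)
print(len(detect_events(x, fs)), "event(s) detected")
```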

  6. Removable Watermarking as a Control Against Cyber Crime in Digital Audio [Removable Watermarking sebagai Pengendalian terhadap Cyber Crime pada Audio Digital]

    Directory of Open Access Journals (Sweden)

    Reyhani Lian Putri

    2017-08-01

    Full Text Available The rapid development of information technology demands that its users be more careful as cyber crime keeps increasing. Many parties have developed various techniques for protecting digital data, one of which is watermarking. Watermarking technology serves to give identity to, protect, or mark digital data, whether audio, image, or video, that they own. However, such techniques can still be broken by irresponsible parties. In this study, watermarking is applied to digital audio by embedding a watermark that is clearly audible to the human ear (perceptible) into the host audio. The aim is that the audio data remain protected, and any other party who wants to obtain the audio must hold a "key" to remove the watermark. This removable watermarking process is applied to watermark data whose embedding method is known, so that the watermark can be removed and the audio quality improves. Using this method, the audio watermarking performance at the highest distortion yields an average SNR of 7.834 dB and an average ODG of -3.77. The audio quality increases after the watermark is removed, with an average SNR of 24.986 dB, an average ODG of -1.064, and a MOS of 4.40.

  7. A listening test system for automotive audio

    DEFF Research Database (Denmark)

    Christensen, Flemming; Geoff, Martin; Minnaar, Pauli

    2005-01-01

    This paper describes a system for simulating automotive audio through headphones for the purposes of conducting listening experiments in the laboratory. The system is based on binaural technology and consists of a component for reproducing the sound of the audio system itself and a component...

  8. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  9. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    Full Text Available We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large-vocabulary (approximately 1000 words) speech recognition experiments. The experiments use clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB) with additive white Gaussian noise, and by 19% relative to the audio-only speech recognition WER under clean audio conditions.

  10. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing the popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for a non-expert human user to create ground-truth labels. In our experiments, half of all the available audio files have been randomly extracted and used as the training set. The remaining ones have been used as the test set. The classifier has been trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset have been previously manually labeled by domain experts into the three classes defined above.

  11. MP3 audio-editing software for the department of radiology

    International Nuclear Information System (INIS)

    Hong Qingfen; Sun Canhui; Li Ziping; Meng Quanfei; Jiang Li

    2006-01-01

    Objective: To evaluate MP3 audio-editing software in the daily work of the department of radiology. Methods: The audio content of the daily consultation seminar, held in the department of radiology every morning, was recorded and converted into MP3 audio format by a computer-integrated recording device. The audio data were edited, archived, and eventually saved on computer storage media, from which they could be replayed and applied in research or teaching. Results: MP3 audio-editing was a simple process and convenient for saving and searching the data. The record could be easily replayed. Conclusion: MP3 audio-editing perfectly records and saves the contents of the consultation seminar, and has replaced conventional handwritten notes. It is a valuable tool in both research and teaching in the department. (authors)

  12. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Full Text Available Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar with and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is a comprehensive and well-established open-source software package that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command-line interface, as well as a set of developer libraries that can be incorporated into applications.
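    For batch workloads of the kind described, FFmpeg is usually driven from a script; a minimal sketch of a batch transcode loop in Python, where the folder names and encoding settings are placeholders.

```python
import subprocess
from pathlib import Path

SRC = Path("archive/wav_masters")      # hypothetical source folder
DST = Path("archive/mp3_access")       # hypothetical destination folder
DST.mkdir(parents=True, exist_ok=True)

for wav in sorted(SRC.glob("*.wav")):
    out = DST / (wav.stem + ".mp3")
    # -ar sets the sample rate, -ac the channel count, -b:a the audio bitrate.
    cmd = ["ffmpeg", "-y", "-i", str(wav),
           "-ar", "44100", "-ac", "2", "-b:a", "192k", str(out)]
    subprocess.run(cmd, check=True)
    print("transcoded", wav.name)
```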

  13. Use of Effective Audio in E-learning Courseware

    OpenAIRE

    Ray, Kisor

    2015-01-01

    E-learning uses electronic media and information and communication technologies to provide education to the masses. E-learning delivers hypertext, text, audio, images, animation and video using standalone desktop computers, local-area-network-based intranets and internet-based content. While producing e-learning content or courseware, a major decision-making factor is whether to use audio for the benefit of the end users. Generally, three types of audio can be used in e-learning: narration, mus...

  14. Tune in the Net with RealAudio.

    Science.gov (United States)

    Buchanan, Larry

    1997-01-01

    Describes how to connect to the RealAudio Web site to download a player that provides sound from Web pages to the computer through streaming technology. Explains hardware and software requirements and provides addresses for other RealAudio Web sites, including weather information and current news. (LRW)

  15. Relationship between age at menarche and exposure to sexual content in audio-visual media and other factors in Islamic junior high school girls

    Directory of Open Access Journals (Sweden)

    Tity Wulandari

    2018-01-01

    Full Text Available Background In recent decades, girls have experienced menarche at earlier ages, which may have negative effects on health. Exposure to audio-visual media and other factors may influence the age at menarche, although past studies have produced inconsistent results. Objective To assess for relationships between the age at menarche and audio-visual media exposure, socio-economic status, nutritional status, physical activity, and psychosocial dysfunction in adolescent girls. Methods This cross-sectional study was conducted from August to October 2015 in students from two integrated Islamic junior high schools in Medan, North Sumatera. There were 216 students who met the inclusion criteria: aged 10-16 years and experienced menarche. They were asked to fill out questionnaires that had been previously validated, regarding their history of exposure to audio-visual media, physical activity, and psychosocial dysfunction. The data were analyzed by Chi-square and Fisher’s exact tests in order to assess for relationships between audio-visual media exposure and other potential factors with the age at menarche. Results Of 261 female students at the two schools, 216 had undergone menarche, with a mean age at menarche of 11.6 (SD 1.13) years. There was no significant relationship between age at menarche and audio-visual media exposure (P=0.68). Also, there were no significant relationships between factors such as socio-economic and psychosocial status with age at menarche (P=0.64 and P=0.28, respectively). However, there were significant relationships between earlier age at menarche and overweight/obese nutritional status (P=0.02) as well as low physical activity (P=0.01). Multivariate logistic regression analysis showed that low physical activity had the strongest influence on early menarche (RP=2.40; 95%CI 0.92 to 6.24). Conclusion Age at menarche is not significantly associated with sexual content of audio-visual media exposure. However, there were significant

  16. Portable audio electronics for impedance-based measurements in microfluidics

    International Nuclear Information System (INIS)

    Wood, Paul; Sinton, David

    2010-01-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1–50 mM), flow rate (2–120 µL min⁻¹) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ∼10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems. (technical note)
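    The measurement principle, a sine tone written to the audio output with the returned amplitude read back from the audio input, can be sketched as quadrature demodulation of a recorded buffer; here the "recorded" signal is simulated, and converting the amplitude to an impedance would depend on the actual sensing circuit.

```python
import numpy as np

fs = 44100                      # soundcard sample rate
f0 = 1000.0                     # excitation tone within the 20 Hz - 20 kHz audio band
t = np.arange(fs) / fs          # one second of samples

# Simulated "recorded" input: attenuated, phase-shifted tone plus noise.
recorded = 0.37 * np.sin(2 * np.pi * f0 * t + 0.4) + 0.01 * np.random.randn(fs)

# Quadrature (lock-in style) demodulation at the excitation frequency.
i_comp = 2 * np.mean(recorded * np.sin(2 * np.pi * f0 * t))
q_comp = 2 * np.mean(recorded * np.cos(2 * np.pi * f0 * t))
amplitude = np.hypot(i_comp, q_comp)
phase = np.arctan2(q_comp, i_comp)

# In a real setup this amplitude would be converted to impedance via the
# known reference resistor of the measurement cell.
print(f"amplitude = {amplitude:.3f}, phase = {phase:.3f} rad")
```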

  17. Sinusoidal Analysis-Synthesis of Audio Using Perceptual Criteria

    Science.gov (United States)

    Painter, Ted; Spanias, Andreas

    2003-12-01

    This paper presents a new method for the selection of sinusoidal components for use in compact representations of narrowband audio. The method consists of ranking and selecting the most perceptually relevant sinusoids. The idea behind the method is to maximize the matching between the auditory excitation pattern associated with the original signal and the corresponding auditory excitation pattern associated with the modeled signal that is being represented by a small set of sinusoidal parameters. The proposed component-selection methodology is shown to outperform the maximum signal-to-mask ratio selection strategy in terms of subjective quality.

  18. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
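    The claim that a sum of sinusoids is perfectly predicted by an all-pole LP model of order twice the number of sinusoids is easy to verify numerically; a sketch that fits the LP coefficients by least squares on a synthetic three-tone signal.

```python
import numpy as np

fs = 8000
n = np.arange(2048)
# Tonal signal: three sinusoids -> an order-6 LP model should predict it exactly.
x = (np.sin(2 * np.pi * 440 * n / fs)
     + 0.5 * np.sin(2 * np.pi * 1000 * n / fs)
     + 0.2 * np.sin(2 * np.pi * 2500 * n / fs))

order = 6
# Build the linear prediction problem x[k] ~= sum_i a[i] * x[k - 1 - i].
rows = np.array([x[k - order:k][::-1] for k in range(order, len(x))])
target = x[order:]
a, *_ = np.linalg.lstsq(rows, target, rcond=None)

residual = target - rows @ a
print("max |prediction error| =", np.max(np.abs(residual)))   # numerically zero
```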

  19. Modified DCTNet for audio signals classification

    Science.gov (United States)

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-10-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of adaptive DCTNet (A-DCTNet) for audio signals feature extraction. The A-DCTNet applies the idea of constant-Q transform, with its center frequencies of filterbanks geometrically spaced. The A-DCTNet is adaptive to different acoustic scales, and it can better capture low frequency acoustic information that is sensitive to human audio perception than features such as Mel-frequency spectral coefficients (MFSC). We use features extracted by the A-DCTNet as input for classifiers. Experimental results show that the A-DCTNet and Recurrent Neural Networks (RNN) achieve state-of-the-art performance in bird song classification rate, and improve artist identification accuracy in music data. They demonstrate A-DCTNet's applicability to signal processing problems.

  20. Comparative Study of Si and SiC MOSFETs for High Voltage Class D Audio Amplifiers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2014-01-01

    Silicon (Si) Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) are traditionally utilised in class D audio amplifiers. It has been proposed to replace the traditional, inefficient electrodynamic transducer with the electrostatic transducer. This imposes new high-voltage requirements...... on the MOSFETs of class D amplifiers, and significantly reduces the selection of suitable MOSFETs. As a consequence, it is investigated whether Silicon-Carbide (SiC) MOSFETs could represent a valid alternative. The theory of pulse timing errors is revisited for the application of high-voltage and capacitively loaded...... class D amplifiers. It is shown that SiC MOSFETs can compete with Si MOSFETs in terms of THD. Validation is done using simulations and a 500 V amplifier driving a 100 nF load. THD+N below 0.3 % is reported...

  1. DOA Estimation of Audio Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Heusdens, Richard

    2016-01-01

    Reverberation is well-known to have a detrimental impact on many localization methods for audio sources. We address this problem by imposing a model for the early reflections as well as a model for the audio source itself. Using these models, we propose two iterative localization methods...... that estimate the direction-of-arrival (DOA) of both the direct path of the audio source and the early reflections. In these methods, the contribution of the early reflections is essentially subtracted from the signal observations before localization of the direct path component, which may reduce the estimation...

  2. Portable Audio Design

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2014-01-01

    attention to the specific genre; a grasping of the complex relationship between site and time, the actual and the virtual; and getting aquatint with the specific site’s soundscape by approaching it both intuitively and systematically. These steps will finally lead to an audio production that not only...

  3. AUDIO CRYPTANALYSIS- AN APPLICATION OF SYMMETRIC KEY CRYPTOGRAPHY AND AUDIO STEGANOGRAPHY

    Directory of Open Access Journals (Sweden)

    Smita Paira

    2016-09-01

    Full Text Available With recent trends in networking and technology, “Cryptography” and “Steganography” have emerged as essential elements of network security. Although Cryptography plays a major role in transforming the secret message into an encrypted version, it has certain drawbacks. Steganography is the art that addresses one of the basic limitations of Cryptography. In this paper, a new algorithm is proposed based on both Symmetric Key Cryptography and Audio Steganography. The combination of a randomly generated Symmetric Key with the LSB technique of Audio Steganography lets a secret message pass unrecognized through an insecure medium. The generated Stego File is almost lossless, giving 100 percent recovery of the original message. This paper also presents a detailed experimental analysis of the algorithm, a brief comparison with other existing algorithms, and the future scope. The experimental verification and security analysis are promising.
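    A minimal sketch of the two ingredients named above, a symmetric (XOR) key applied to the message and LSB embedding in 16-bit PCM samples; it illustrates the idea only and is not the paper's exact algorithm.

```python
import numpy as np

def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the message with a repeating key."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def embed_lsb(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of int16 PCM samples."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    stego = samples.copy()
    stego[:len(bits)] = (stego[:len(bits)] & ~1) | bits
    return stego

def extract_lsb(samples: np.ndarray, n_bytes: int) -> bytes:
    bits = (samples[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

key = b"secret-key"
secret = xor_encrypt(b"meet at noon", key)
host = (np.random.randn(44100) * 3000).astype(np.int16)   # stand-in for host audio
stego_audio = embed_lsb(host, secret)
recovered = xor_encrypt(extract_lsb(stego_audio, len(secret)), key)
print(recovered)   # b'meet at noon'
```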

  4. Adaptive DCTNet for Audio Signal Classification

    OpenAIRE

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-01-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of adaptive DCTNet (A-DCTNet) for audio signals feature extraction. The A-DCTNet applies the idea of constant-Q transform, with its center frequencies of filterbanks geometrically spaced. The A-DCTNet is adaptive to different acoustic scales, and it can better capture low frequency acoustic information that is sensitive to h...

  5. Fall Detection Using Smartphone Audio Features.

    Science.gov (United States)

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers, k-nearest neighbor (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN), are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
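    A minimal sketch of one feature/classifier pairing mentioned above, MFCC features with a k-nearest-neighbour classifier, assuming librosa and scikit-learn are available; the clip file names and labels are placeholders.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load a short sound-event clip and summarise it by its mean MFCC vector."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical labelled clips: 1 = fall, 0 = no fall (door slam, footsteps, ...).
train_files = ["fall_01.wav", "fall_02.wav", "noise_01.wav", "noise_02.wav"]
train_labels = [1, 1, 0, 0]

X = np.vstack([mfcc_features(f) for f in train_files])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)

print("fall" if clf.predict([mfcc_features("unknown_event.wav")])[0] else "no fall")
```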

  6. Audio-Tutorial Instruction: A Strategy For Teaching Introductory College Geology.

    Science.gov (United States)

    Fenner, Peter; Andrews, Ted F.

    The rationale of audio-tutorial instruction is discussed, and the history and development of the audio-tutorial botany program at Purdue University is described. Audio-tutorial programs in geology at eleven colleges and one school are described, illustrating several ways in which programs have been developed and integrated into courses. Programs…

  7. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    Science.gov (United States)

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

    Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy-a key therapy quality index-from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality insurance and therapist training.

  8. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling

    Directory of Open Access Journals (Sweden)

    Bo Xiao

    2016-04-01

    Full Text Available Scaling up psychotherapy services such as for addiction counseling is a critical societal need. One challenge is ensuring quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology-based system to automate the assessment of therapist empathy—a key therapy quality index—from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist’s language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality insurance and therapist training.

  9. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is among the most important and widely used applications nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, firstly, into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data are used for training the classifiers; ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
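    A minimal sketch of the two-stage idea, bagged SVMs for speech versus non-speech followed by an ANN for music versus environment sound, using scikit-learn; feature extraction and the rule-based silence stage are omitted, and the data are random placeholders.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Placeholder frame-level feature matrices (e.g. MFCC/energy features per segment).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 20))
y_speech = rng.integers(0, 2, size=400)        # 1 = speech, 0 = non-speech
X_nonspeech = rng.normal(size=(200, 20))
y_music = rng.integers(0, 2, size=200)         # 1 = music, 0 = environment sound

# Stage 1: bagged SVMs separate speech from non-speech segments.
stage1 = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10).fit(X_train, y_speech)
# Stage 2: an ANN separates music from environment sound within non-speech.
stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_nonspeech, y_music)

segment = rng.normal(size=(1, 20))
if stage1.predict(segment)[0] == 1:
    print("speech (silence vs. pure speech would be decided by the rule-based stage)")
else:
    print("music" if stage2.predict(segment)[0] == 1 else "environment sound")
```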

  10. "Are You Listening Please?" The Advantages of Electronic Audio Feedback Compared to Written Feedback

    Science.gov (United States)

    Lunt, Tom; Curran, John

    2010-01-01

    Feedback on students' work is probably one of the most important aspects of learning, yet students report, according to the National Union of Students (NUS) Survey of 2008, unhappiness with the feedback process. Students were unhappy with the quality, detail and timing of feedback. This paper examines the benefits of using audio, as opposed to…

  11. Audio Description as a Pedagogical Tool

    Directory of Open Access Journals (Sweden)

    Georgina Kleege

    2015-05-01

    Full Text Available Audio description is the process of translating visual information into words for people who are blind or have low vision. Typically such description has focused on films, museum exhibitions, images and video on the internet, and live theater. Because it allows people with visual impairments to experience a variety of cultural and educational texts that would otherwise be inaccessible, audio description is a mandated aspect of disability inclusion, although it remains markedly underdeveloped and underutilized in our classrooms and in society in general. Along with increasing awareness of disability, audio description pushes students to practice close reading of visual material, deepen their analysis, and engage in critical discussions around the methodology, standards and values, language, and role of interpretation in a variety of academic disciplines. We outline a few pedagogical interventions that can be customized to different contexts to develop students' writing and critical thinking skills through guided description of visual material.

  12. Extracting meaning from audio signals - a machine learning approach

    DEFF Research Database (Denmark)

    Larsen, Jan

    2007-01-01

    * Machine learning framework for sound search * Genre classification * Music and audio separation * Wind noise suppression

  13. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  14. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  15. Efficiency in audio processing : filter banks and transcoding

    NARCIS (Netherlands)

    Lee, Jun Wei

    2007-01-01

    Audio transcoding is the conversion of digital audio from one compressed form A to another compressed form B, where A and B have different compression properties, such as a different bit-rate, sampling frequency or compression method. This is typically achieved by decoding A to an intermediate

  16. Content Discovery from Composite Audio : An unsupervised approach

    NARCIS (Netherlands)

    Lu, L.

    2009-01-01

    In this thesis, we developed and assessed a novel robust and unsupervised framework for semantic inference from composite audio signals. We focused on the problem of detecting audio scenes and grouping them into meaningful clusters. Our approach addressed all major steps in a general process of

  17. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    Science.gov (United States)

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  18. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
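    As a compact software reference for the Huffman stage (code construction from symbol counts), independent of the FPGA architectures discussed, a small sketch follows.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol sequence."""
    counts = Counter(symbols)
    # Each heap entry: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

table = huffman_code("abracadabra")
encoded = "".join(table[s] for s in "abracadabra")
print(table, len(encoded), "bits")   # frequent symbols get the shortest codes
```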

  19. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Directory of Open Access Journals (Sweden)

    Albertus C. den Brinker

    2007-01-01

    Full Text Available This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  20. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Science.gov (United States)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  1. Can audio coached 4D CT emulate free breathing during the treatment course?

    DEFF Research Database (Denmark)

    Persson, Gitte F; Nygaard, Ditte E; Olsen, Mikael

    2008-01-01

    BACKGROUND: The image quality of 4DCT depends on breathing regularity. Respiratory audio coaching may improve regularity and reduce motion artefacts. We question the safety of coached planning 4DCT without coaching during treatment. We investigated the possibility of coaching to a more stable...... breathing without changing the breathing amplitude. The interfraction variation of the breathing cycle amplitude in free and coached breathing was studied as well as the possible impact of fatigue on longer coaching sessions. METHODS: Thirteen volunteers completed respiratory audio coaching on 3 days within...... a 2 week period. An external marker system monitoring the motion of the thoraco-abdominal wall was used to track the respiration. On all days, free breathing and two coached breathing curves were recorded. We assumed that free versus coached breathing from day 1 (reference session) simulated breathing...

  2. Interactive 3D audio: Enhancing awareness of details in immersive soundscapes?

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Schwartz, Stephen; Larsen, Jan

    2012-01-01

    Spatial audio and the possibility of interacting with the audio environment is thought to increase listeners' attention to details in a soundscape. This work examines if interactive 3D audio enhances listeners' ability to recall details in a soundscape. Nine different soundscapes were constructed...

  3. Personalized Audio Systems - a Bayesian Approach

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Hansen, Toke Jansen

    2013-01-01

    Modern audio systems are typically equipped with several user-adjustable parameters unfamiliar to most users listening to the system. To obtain the best possible setting, the user is forced into multi-parameter optimization with respect to the user's own objective and preference. To address this......, the present paper presents a general interactive framework for personalization of such audio systems. The framework builds on Bayesian Gaussian process regression in which a model of the user's objective function is updated sequentially. The parameter setting to be evaluated in a given trial is selected...
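    The sequential loop, fit a surrogate of the user's ratings and then propose the next parameter setting to audition, can be sketched with an off-the-shelf Gaussian process regressor; this only illustrates the loop and is not the paper's specific model or acquisition rule.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Ratings collected so far for two user-adjustable parameters (e.g. bass, treble),
# both normalised to [0, 1]; the values are placeholders.
settings = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1], [0.3, 0.3]])
ratings = np.array([3.0, 7.0, 4.0, 5.5])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=0.5)
gp.fit(settings, ratings)

# Propose the next setting to audition: highest upper confidence bound on a grid.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21)), -1).reshape(-1, 2)
mean, std = gp.predict(grid, return_std=True)
next_setting = grid[np.argmax(mean + 1.0 * std)]
print("next setting to evaluate:", next_setting)
```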

  4. IELTS speaking instruction through audio/voice conferencing

    Directory of Open Access Journals (Sweden)

    Hamed Ghaemi

    2012-02-01

    Full Text Available The current study aims at investigating the impact of audio/voice conferencing, as a new approach to teaching speaking, on the speaking performance and/or speaking band score of IELTS candidates. Experimental group subjects participated in an audio conferencing class, while those of the control group attended a traditional IELTS speaking class. At the end of the study, all subjects participated in an IELTS examination held on November fourth in Tehran, Iran. To compare the group means, an independent t-test analysis was employed. The difference between the experimental and control groups was statistically significant (P<0.01); that is, the candidates in the experimental group outperformed those in the control group in IELTS speaking test scores.

  5. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-vision fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  6. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-vision fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  7. WebGL and web audio software lightweight components for multimedia education

    Science.gov (United States)

    Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław

    2017-08-01

    The paper presents the results of our recent work on the development of the contemporary computing platform DC2 for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader, in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define: source audio nodes including synthetic sources, destination audio nodes, and nodes for audio processing such as sound wave shaping, spectral band filtering, convolution-based modification, etc. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing and analysis by shaders is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.

  8. All About Audio Equalization: Solutions and Frontiers

    Directory of Open Access Journals (Sweden)

    Vesa Välimäki

    2016-05-01

    Full Text Available Audio equalization is a vast and active research area. The extent of research means that one often cannot identify the preferred technique for a particular problem. This review paper bridges those gaps, systematically providing a deep understanding of the problems and approaches in audio equalization, their relative merits and applications. Digital signal processing techniques for modifying the spectral balance in audio signals and applications of these techniques are reviewed, ranging from classic equalizers to emerging designs based on new advances in signal processing and machine learning. Emphasis is placed on putting the range of approaches within a common mathematical and conceptual framework. The application areas discussed herein are diverse, and include well-defined, solvable problems of filter design subject to constraints, as well as newly emerging challenges that touch on problems in semantics, perception and human computer interaction. Case studies are given in order to illustrate key concepts and how they are applied in practice. We also recommend preferred signal processing approaches for important audio equalization problems. Finally, we discuss current challenges and the uncharted frontiers in this field. The source code for methods discussed in this paper is made available at https://code.soundsoftware.ac.uk/projects/allaboutaudioeq.
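
    As one concrete instance of the classic, well-defined filter-design problems surveyed above, the sketch below computes a standard peaking-equalizer biquad (Audio EQ Cookbook style) and applies it to a test signal; the sample rate, centre frequency, gain and Q are arbitrary assumptions used only for illustration.

        # Sketch: peaking EQ biquad, a basic building block of parametric equalizers.
        import numpy as np
        from scipy.signal import lfilter

        def peaking_eq(fs, f0, gain_db, q):
            """Return (b, a) biquad coefficients for a peaking equalizer."""
            A = 10.0 ** (gain_db / 40.0)
            w0 = 2.0 * np.pi * f0 / fs
            alpha = np.sin(w0) / (2.0 * q)
            b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
            a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
            return b / a[0], a / a[0]

        fs = 48000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)
        b, a = peaking_eq(fs, f0=1000, gain_db=6.0, q=1.0)   # +6 dB boost around 1 kHz
        y = lfilter(b, a, x)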

  9. Can audio recording improve patients' recall of outpatient consultations?

    DEFF Research Database (Denmark)

    Wolderslund, Maiken; Kofoed, Poul-Erik; Axboe, Mette

    Introduction In order to give patients the possibility to listen to their consultation again, we have designed a system which gives the patients access to digital audio recordings of their consultations. An Interactive Voice Response platform enables the audio recording and gives the patients access...... and those who have not (control). The audio recordings and the interviews are coded according to six themes: Test results, Treatment, Risks, Future tests, Advice and Plan. Afterwards, the extent of the patients' recall is assessed by comparing the accuracy of the patient's statements (interview...

  10. “Wrapping” X3DOM around Web Audio API

    Directory of Open Access Journals (Sweden)

    Andreas Stamoulias

    2015-12-01

    Full Text Available Spatial sound has a conceptual role in Web3D environments, due to the highly realistic scenes it can provide. Lately, efforts have concentrated on extending X3D/X3DOM with spatial sound attributes. This paper presents a novel method for the introduction of spatial sound components in the X3DOM framework, based on the X3D specification and the Web Audio API. The proposed method incorporates the introduction of enhanced sound nodes for X3DOM, derived from the implementation of the X3D standard components and enriched with additional features of the Web Audio API. Moreover, several example scenarios were developed for the evaluation of our approach. The implemented examples establish the feasibility of the newly registered nodes in X3DOM for spatial sound characteristics in Web3D virtual worlds.

  11. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application-based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
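
    The abstract does not specify the generic lossy-compression simulation; as a rough, hedged illustration of the kind of measurement described (changes in spectrum and average power), the sketch below uses a low-pass filter as a crude stand-in for lossy coding and compares the signal before and after. The cutoff frequency and the noise test signal are arbitrary assumptions.

        # Sketch: measure spectral and power changes caused by a crude "lossy" operation.
        import numpy as np
        from scipy.signal import butter, lfilter

        fs = 44100
        x = np.random.randn(fs)                              # stand-in test signal

        b, a = butter(6, 8000 / (fs / 2), btype="low")       # crude stand-in for a lossy codec
        x_lossy = lfilter(b, a, x)

        power_change_db = 10 * np.log10(np.mean(x_lossy ** 2) / np.mean(x ** 2))
        spec = np.abs(np.fft.rfft(x))
        spec_lossy = np.abs(np.fft.rfft(x_lossy))
        distortion = np.mean((spec - spec_lossy) ** 2) / np.mean(spec ** 2)
        print(f"average power change: {power_change_db:.2f} dB")
        print(f"relative spectral distortion: {distortion:.3f}")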

  12. An Interactive Concert Program Based on Infrared Watermark and Audio Synthesis

    Science.gov (United States)

    Wang, Hsi-Chun; Lee, Wen-Pin Hope; Liang, Feng-Ju

    The objective of this research is to propose a video/audio system which allows the user to listen to the typical music notes in a concert program under infrared detection. The system synthesizes audio with different pitches and tempi in accordance with the encoded data in a 2-D barcode embedded in the infrared watermark. The digital halftoning technique has been used to fabricate the infrared watermark composed of halftone dots by both amplitude modulation (AM) and frequency modulation (FM). The results show that this interactive system successfully recognizes the barcode and synthesizes audio under infrared detection of a concert program, which remains valid for human observation of the contents. This interactive video/audio system greatly expands the capability of printed paper to audio display and also has many potential value-added applications.
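
    The abstract does not give the synthesis details; as a minimal illustration of rendering decoded pitch and tempo data as audio, the sketch below turns a hypothetical list of (MIDI pitch, duration) pairs, standing in for the barcode payload, into concatenated sine tones.

        # Sketch: synthesize sine tones from decoded (pitch, duration) data.
        import numpy as np

        def synthesize(notes, fs=44100):
            """Render (midi_pitch, duration_in_seconds) pairs as concatenated sine tones."""
            out = []
            for pitch, dur in notes:
                freq = 440.0 * 2 ** ((pitch - 69) / 12)       # MIDI note number to Hz
                t = np.arange(int(fs * dur)) / fs
                tone = 0.3 * np.sin(2 * np.pi * freq * t)
                tone[:100] *= np.linspace(0, 1, 100)          # short fades to avoid clicks
                tone[-100:] *= np.linspace(1, 0, 100)
                out.append(tone)
            return np.concatenate(out)

        decoded = [(60, 0.5), (64, 0.5), (67, 0.5), (72, 1.0)]  # hypothetical barcode payload
        audio = synthesize(decoded)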

  13. Audio Networking in the Music Industry

    Directory of Open Access Journals (Sweden)

    Glebs Kuzmics

    2018-01-01

    Full Text Available This paper surveys the role of computer networking technologies in the music industry. It presents a comparison of the relevant technologies and their defining advantages and disadvantages, an analysis and discussion of the market for network-enabled audio products, and a discussion of different devices. The idea of replacing a proprietary solution with open-source and freeware software programs has been chosen as the fundamental concept of this research. The technologies covered include: native IEEE AVnu Alliance Audio Video Bridging (AVB), CobraNet®, Audinate Dante™ and Harman BLU Link.

  14. Four-quadrant flyback converter for direct audio power amplification

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple solution with better...

  15. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design...... and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design and support such systems to be applied in areas, such as the humanities....

  16. Semantic Labeling of Nonspeech Audio Clips

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ma

    2010-01-01

    Full Text Available Human communication about entities and events is primarily linguistic in nature. While visual representations of information are shown to be highly effective as well, relatively little is known about the communicative power of auditory nonlinguistic representations. We created a collection of short nonlinguistic auditory clips encoding familiar human activities, objects, animals, natural phenomena, machinery, and social scenes. We presented these sounds to a broad spectrum of anonymous human workers using Amazon Mechanical Turk and collected verbal sound labels. We analyzed the human labels in terms of their lexical and semantic properties to ascertain that the audio clips do evoke the information suggested by their pre-defined captions. We then measured the agreement with the semantically compatible labels for each sound clip. Finally, we examined which kinds of entities and events, when captured by nonlinguistic acoustic clips, appear to be well-suited to elicit information for communication, and which ones are less discriminable. Our work is set against the broader goal of creating resources that facilitate communication for people with some types of language loss. Furthermore, our data should prove useful for future research in machine analysis/synthesis of audio, such as computational auditory scene analysis, and annotating/querying large collections of sound effects.

  17. Unsupervised topic modelling on South African parliament audio data

    CSIR Research Space (South Africa)

    Kleynhans, N

    2014-11-01

    Full Text Available Using a speech recognition system to convert spoken audio to text can enable the structuring of large collections of spoken audio data. A convenient means to summarise or cluster spoken data is to identify the topic under discussion. There are many...

  18. Classifying laughter and speech using audio-visual feature prediction

    NARCIS (Netherlands)

    Petridis, Stavros; Asghar, Ali; Pantic, Maja

    2010-01-01

    In this study, a system that discriminates laughter from speech by modelling the relationship between audio and visual features is presented. The underlying assumption is that this relationship is different between speech and laughter. Neural networks are trained which learn the audio-to-visual and

  19. Sistema de adquisición y procesamiento de audio

    OpenAIRE

    Pérez Segurado, Rubén

    2015-01-01

    The objective of this project is the design and implementation of a platform for an audio processing system. The system will receive an analogue audio signal from an audio source, allow digital processing of that signal, and generate a processed signal that will be sent to external loudspeakers. The processing system will be built with: - A Lattice FPGA device, model MachX02-7000-HE, on which all the...

  20. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs, a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which are based pattern-based random generators, heuristics, and rewriting rules. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  1. ENERGY STAR Certified Audio Video

    Data.gov (United States)

    U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of...

  2. TECHNICAL NOTE: Portable audio electronics for impedance-based measurements in microfluidics

    Science.gov (United States)

    Wood, Paul; Sinton, David

    2010-08-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1-50 mM), flow rate (2-120 µL min-1) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ~10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems.
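
    As a sketch of how voltage-based audio signals can support an impedance-style measurement (illustrative only; the "recorded" response below is simulated rather than captured from a soundcard, and the test frequency is an arbitrary choice within the 20 Hz to 20 kHz band):

        # Sketch: lock-in style amplitude/phase estimation at an audio test frequency.
        import numpy as np

        fs = 44100
        f_test = 1000.0
        t = np.arange(fs) / fs
        excitation = np.sin(2 * np.pi * f_test * t)           # audio-output test signal

        # Simulated audio-input response; in practice this comes from the record channel.
        response = 0.42 * np.sin(2 * np.pi * f_test * t - 0.3) + 0.01 * np.random.randn(fs)

        i_comp = 2 * np.mean(response * np.sin(2 * np.pi * f_test * t))
        q_comp = 2 * np.mean(response * np.cos(2 * np.pi * f_test * t))
        amplitude = np.hypot(i_comp, q_comp)                  # relative response amplitude
        phase = np.arctan2(q_comp, i_comp)                    # phase shift in radians
        print(f"amplitude ratio: {amplitude:.3f}, phase: {phase:.3f} rad")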

  3. Design of batch audio/video conversion platform based on JavaEE

    Science.gov (United States)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing shows significant features such as a diversity of coding standards for audio and video files and massive data volumes. Faced with massive and diverse data, quickly and efficiently converting to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand and present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture and combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies used in the design of the platform architecture are analyzed emphatically, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large batch file processing, and has certain practical application value.
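
    A minimal sketch of the conversion-server idea, here driving the FFMPEG command line from Python rather than the Java stack described in the paper; the directory names and the AAC/128 kbit/s target are assumptions for illustration only.

        # Sketch: batch re-encode the audio of all files in a folder using ffmpeg.
        import pathlib
        import subprocess

        SRC_DIR = pathlib.Path("incoming")        # hypothetical input directory
        DST_DIR = pathlib.Path("converted")
        DST_DIR.mkdir(exist_ok=True)

        for src in SRC_DIR.glob("*"):
            if not src.is_file():
                continue
            dst = DST_DIR / (src.stem + ".m4a")
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src), "-vn", "-c:a", "aac", "-b:a", "128k", str(dst)],
                check=True,
            )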

  4. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sports audio-visual documentation in the information systems of national, regional and local television channels is analysed within this framework. To this end, the documentary chain through which sports audio-visual information passes is traced, with the purpose of analysing each one of its parameters, thereby offering a series of recommendations and norms for the preparation of the sports audio-visual record. Evidently, audio-visual sports documentation differs i...

  5. Perancangan Sistem Audio Mobil Berbasiskan Sistem Pakar dan Web

    Directory of Open Access Journals (Sweden)

    Djunaidi Santoso

    2011-12-01

    Full Text Available Designing car audio that fits the user's needs is a fun activity. However, the design often consumes more time and is costly, since it has to be discussed with experts several times. For easy access to information in designing a car audio system, as well as error prevention, a car audio design system based on an expert system and the web is proposed for those who do not have sufficient time and budget to consult experts directly. This system consists of tutorial modules designed using the HyperText Preprocessor (PHP) and MySQL as the database. The car audio design system is evaluated using the black-box testing method, which focuses on the functional needs of the application. Tests are performed by providing inputs and checking that the outputs correspond to the function of each module. The test results show the correspondence between input and output, which means that the program meets the initial goals of the design.

  6. An introduction to audio content analysis applications in signal processing and music informatics

    CERN Document Server

    Lerch, Alexander

    2012-01-01

    "With the proliferation of digital audio distribution over digital media, audio content analysis is fast becoming a requirement for designers of intelligent signal-adaptive audio processing systems. Written by a well-known expert in the field, this book provides quick access to different analysis algorithms and allows comparison between different approaches to the same task, making it useful for newcomers to audio signal processing and industry experts alike. A review of relevant fundamentals in audio signal processing, psychoacoustics, and music theory, as well as downloadable MATLAB files are also included"--

  7. BAT: An open-source, web-based audio events annotation tool

    OpenAIRE

    Blai Meléndez-Catalan, Emilio Molina, Emilia Gómez

    2017-01-01

    In this paper we present BAT (BMAT Annotation Tool), an open-source, web-based tool for the manual annotation of events in audio recordings, developed at BMAT (Barcelona Music and Audio Technologies). The main feature of the tool is that it provides an easy way to annotate the salience of simultaneous sound sources. Additionally, it allows multiple ontologies to be defined to adapt to multiple tasks, and it offers the possibility of cross-annotating audio data. Moreover, it is easy to install and deploy...

  8. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher- level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux...

  9. Audio Teleconferencing: Low Cost Technology for External Studies Networking.

    Science.gov (United States)

    Robertson, Bill

    1987-01-01

    This discussion of the benefits of audio teleconferencing for distance education programs and for business and government applications focuses on the recent experience of Canadian educational users. Four successful operating models and their costs are reviewed, and it is concluded that audio teleconferencing is cost efficient and educationally…

  10. Real-Time Audio Processing on the T-CREST Multicore Platform

    DEFF Research Database (Denmark)

    Ausin, Daniel Sanz; Pezzarossa, Luca; Schoeberl, Martin

    2017-01-01

    Multicore platforms are nowadays widely used for audio processing applications, due to the improvement of computational power that they provide. However, some of these systems are not optimized for temporally constrained environments, which often leads to an undesired increase in the latency of the audio signal. This paper presents a real-time multicore audio processing system based on the T-CREST platform. T-CREST is a time-predictable multicore processor for real-time embedded systems. Multiple audio effect tasks have been implemented, which can be connected together in different configurations forming sequential and parallel effect chains, and using a network-on-chip for intercommunication between processors. The evaluation of the system shows that real-time processing of multiple effect configurations is possible, and that the estimation and control of latency ensures real-time behavior.
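
    A minimal, single-threaded sketch of chaining audio effect tasks sequentially (purely illustrative; it reflects neither the T-CREST hardware, nor its network-on-chip messaging, nor its timing guarantees, and the individual effects are arbitrary examples):

        # Sketch: process one audio block through a sequential chain of simple effects.
        import numpy as np

        def gain(block, g=0.8):
            return g * block

        def soft_clip(block, drive=4.0):
            return np.tanh(drive * block) / np.tanh(drive)

        def simple_delay(block, state, mix=0.4):
            """Single-tap echo; `state` holds the last len(state) input samples."""
            out = np.empty_like(block)
            for n, x in enumerate(block):
                out[n] = x + mix * state[0]
                state.append(x)
                state.pop(0)
            return out, state

        fs = 44100
        block = np.sin(2 * np.pi * 440 * np.arange(256) / fs)
        delay_state = [0.0] * 2205                 # 50 ms delay line at 44.1 kHz

        y = gain(block)                            # effect chain: gain -> clip -> delay
        y = soft_clip(y)
        y, delay_state = simple_delay(y, delay_state)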

  11. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  12. Selected Audio-Visual Materials for Consumer Education. [New Version.

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  13. Effect of Audio Coaching on Correlation of Abdominal Displacement With Lung Tumor Motion

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Narita, Yuichiro; Matsuo, Yukinori; Narabayashi, Masaru; Nakata, Manabu; Sawada, Akira; Mizowaki, Takashi; Nagata, Yasushi; Hiraoka, Masahiro

    2009-01-01

    Purpose: To assess the effect of audio coaching on the time-dependent behavior of the correlation between abdominal motion and lung tumor motion and the corresponding lung tumor position mismatches. Methods and Materials: Six patients who had a lung tumor with a motion range >8 mm were enrolled in the present study. Breathing-synchronized fluoroscopy was performed initially without audio coaching, followed by fluoroscopy with recorded audio coaching for multiple days. Two different measurements, anteroposterior abdominal displacement using the real-time positioning management system and superoinferior (SI) lung tumor motion by X-ray fluoroscopy, were performed simultaneously. Their sequential images were recorded using one display system. The lung tumor position was automatically detected with a template matching technique. The relationship between the abdominal and lung tumor motion was analyzed with and without audio coaching. Results: The mean SI tumor displacement was 10.4 mm without audio coaching and increased to 23.0 mm with audio coaching (p < .01). The correlation coefficients ranged from 0.89 to 0.97 with free breathing. Applying audio coaching, the correlation coefficients improved significantly (range, 0.93-0.99; p < .01), and the SI lung tumor position mismatches became larger in 75% of all sessions. Conclusion: Audio coaching served to increase the degree of correlation and make it more reproducible. In addition, the phase shifts between tumor motion and abdominal displacement were improved; however, all patients breathed more deeply, and the SI lung tumor position mismatches became slightly larger with audio coaching than without audio coaching.

  14. APLIKASI MEDIA AUDIO-VISUAL DALAM PEMBELAJARAN SPEAKING SKILL DENGAN PENDEKATAN AUDIOLINGUAL: Studi Kasus di MAN Batang

    Directory of Open Access Journals (Sweden)

    Slamet Untung

    2012-10-01

    Full Text Available Research on the application of audio and visual media for learning speaking skills through the audio-lingual approach is a valuable contribution to senior high school education, including Islamic senior high schools, particularly in finding ways to improve the learning components directly related to the medium and method of learning speaking skills. This research aims to find out its significance and relevance. The main variable of this research comprises the whole set of activities in the application of audio and visual media in learning speaking skills through the audio-lingual approach. The data were collected through observation, interviews, questionnaires and documentation. The research took place in the state Islamic senior high school of Batang in Central Java. The results show that the application helps the students to speak English correctly and accurately and stresses the message of the speaking-skill learning.

  15. Four-quadrant flyback converter for direct audio power amplification

    OpenAIRE

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple solution with better efficiency, a higher level of integration and a lower component count.

  16. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time but, stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  17. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Full Text Available Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time but, stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  18. Semantic Analysis of Multimedial Information Usign Both Audio and Visual Clues

    Directory of Open Access Journals (Sweden)

    Andrej Lukac

    2008-01-01

    Full Text Available Nowadays, there is a lot of information in databases (text, audio/video form, etc.). It is important to be able to describe these data for better orientation in them. It is necessary to apply audio/video properties, which are used for metadata management, segmenting the document into semantically meaningful units, classifying each unit into a predefined scene type, indexing, and summarizing the document for efficient retrieval and browsing. The data can be used for a system that automatically searches for a specific person in a sequence, as well as for special video sequences. Audio/video properties are represented by descriptors and description schemes. There are many features that can be used to characterize multimedia signals. We can analyze audio and video sequences jointly or consider them completely separately. Our aim is oriented towards the possibilities of combining multimedia features. The focus is directed at discussion programmes, because there are more choices of how to combine audio features with video sequences.

  19. Audio computer-assisted self interview compared to traditional interview in an HIV-related behavioral survey in Vietnam.

    Science.gov (United States)

    Le, Linh Cu; Vu, Lan T H

    2012-10-01

    Globally, population surveys on HIV/AIDS and other sensitive topics have been using audio computer-assisted self interview for many years. This interview technique, however, is still new to Vietnam and little is known about its application and impact in general population surveys. One plausible hypothesis is that residents of Vietnam interviewed using this technique may provide a higher response rate and be more willing to reveal their true behaviors than if interviewed with traditional methods. This study aims to compare audio computer-assisted self interview with traditional face-to-face personal interview and self-administered interview with regard to rates of refusal and affirmative responses to questions on sensitive topics related to HIV/AIDS. In June 2010, a randomized study was conducted in three cities (Ha Noi, Da Nang and Can Tho), using a sample of 4049 residents aged 15 to 49 years. Respondents were randomly assigned to one of three interviewing methods: audio computer-assisted self interview, personal face-to-face interview, and self-administered paper interview. Instead of providing answers directly to interviewer questions as with traditional methods, audio computer-assisted self-interview respondents read the questions displayed on a laptop screen, while listening to the questions through audio headphones, then entered responses using a laptop keyboard. A MySQL database was used for data management and the SPSS statistical package version 18 was used for data analysis with bivariate and multivariate statistical techniques. Rates of high risk behaviors and mean values of continuous variables were compared for the three data collection methods. Audio computer-assisted self interview showed advantages over comparison techniques, achieving lower refusal rates and reporting higher prevalence of some sensitive and risk behaviors (perhaps an indication of more truthful answers). Premarital sex was reported by 20.4% in the audio computer-assisted self-interview survey

  20. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance...... training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between data sets (2) and (3) suggested superior postural stability...

  1. Local Control of Audio Environment: A Review of Methods and Applications

    Directory of Open Access Journals (Sweden)

    Jussi Kuutti

    2014-02-01

    Full Text Available The concept of a local audio environment is to have sound playback locally restricted such that, ideally, adjacent regions of an indoor or outdoor space could exhibit their own individual audio content without interfering with each other. This would enable people to listen to their content of choice without disturbing others next to them, yet, without any headphones to block conversation. In practice, perfect sound containment in free air cannot be attained, but a local audio environment can still be satisfactorily approximated using directional speakers. Directional speakers may be based on regular audible frequencies or they may employ modulated ultrasound. Planar, parabolic, and array form factors are commonly used. The directivity of a speaker improves as its surface area and sound frequency increases, making these the main design factors for directional audio systems. Even directional speakers radiate some sound outside the main beam, and sound can also reflect from objects. Therefore, directional speaker systems perform best when there is enough ambient noise to mask the leaking sound. Possible areas of application for local audio include information and advertisement audio feed in commercial facilities, guiding and narration in museums and exhibitions, office space personalization, control room messaging, rehabilitation environments, and entertainment audio systems.

  2. Balancing Audio

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    2016-01-01

    This paper explores the concept of balance in music production and examines the role of conceptual metaphors in reasoning about audio editing. Balance may be the most central concept in record production, however, the way we cognitively understand and respond meaningfully to a mix requiring balance is not thoroughly understood. In this paper I treat balance as a metaphor that we use to reason about several different actions in music production, such as adjusting levels, editing the frequency spectrum or the spatiality of the recording. This study is based on an exploration of a linguistic corpus of sound...

  3. Gateway of Sound: Reassessing the Role of Audio Mastering in the Art of Record Production

    Directory of Open Access Journals (Sweden)

    Carlo Nardi

    2014-06-01

    Full Text Available Audio mastering, notwithstanding an apparent lack of scholarly attention, is a crucial gateway between production and consumption and, as such, is worth further scrutiny, especially in music genres like house or techno, which place great emphasis on sound production qualities. In this article, drawing on personal interviews with mastering engineers and field research in mastering studios in Italy and Germany, I investigate the practice of mastering engineering, paying close attention to the negotiation of techniques and sound aesthetics in relation to changes in the industry formats and, in particular, to the growing shift among DJs from vinyl to compressed digital formats. I then discuss the specificity of audio mastering in relation to EDM, insofar as DJs and controllerists conceive of the master, rather than as a finished product destined to listening, as raw material that can be reworked in performance.

  4. Parametric Audio Based Decoder and Music Synthesizer for Mobile Applications

    NARCIS (Netherlands)

    Oomen, A.W.J.; Szczerba, M.Z.; Therssen, D.

    2011-01-01

    This paper reviews parametric audio coders and discusses novel technologies introduced in a low-complexity, low-power-consumption audio decoder and music synthesizer platform developed by the authors. The decoder uses a parametric coding scheme based on the MPEG-4 Parametric Audio standard. In order to

  5. Automated Speech and Audio Analysis for Semantic Access to Multimedia

    NARCIS (Netherlands)

    Jong, F.M.G. de; Ordelman, R.; Huijbregts, M.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  6. Automated speech and audio analysis for semantic access to multimedia

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Ordelman, Roeland J.F.; Huijbregts, M.A.H.; Avrithis, Y.; Kompatsiaris, Y.; Staab, S.; O' Connor, N.E.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  7. Improving audio chord transcription by exploiting harmonic and metric knowledge

    NARCIS (Netherlands)

    de Haas, W.B.; Rodrigues Magalhães, J.P.; Wiering, F.

    2012-01-01

    We present a new system for chord transcription from polyphonic musical audio that uses domain-specific knowledge about tonal harmony and metrical position to improve chord transcription performance. Low-level pulse and spectral features are extracted from an audio source using the Vamp plugin

  8. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider the quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on audiovisual streaming services. Specifically, we evaluate the amount of information conveyed by audiovisual content where both the video and audio channels may be strongly degraded, and the audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.

  9. Extraction, Mapping, and Evaluation of Expressive Acoustic Features for Adaptive Digital Audio Effects

    DEFF Research Database (Denmark)

    Holfelt, Jonas; Csapo, Gergely; Andersson, Nikolaj Schwab

    2017-01-01

    This paper describes the design and implementation of a real-time adaptive digital audio effect with an emphasis on using expressive audio features that control effect parameters. Research in adaptive digital audio effects is covered along with studies about expressivity and important...

  10. Let Their Voices Be Heard! Building a Multicultural Audio Collection.

    Science.gov (United States)

    Tucker, Judith Cook

    1992-01-01

    Discusses building a multicultural audio collection for a library. Gives some guidelines about selecting materials that really represent different cultures. Audio materials that are considered fall roughly into the categories of children's stories, didactic materials, oral histories, poetry and folktales, and music. The goal is an authentic…

  11. Hysteretic self-oscillating bandpass current mode control for Class D audio amplifiers driving capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2013-01-01

    A hysteretic self-oscillating bandpass current mode control (BPCM) scheme for Class D audio amplifiers driving capacitive transducers is presented. The scheme provides excellent stability margins and low distortion over a wide range of operating conditions. Small-signal behavior of the amplifier...... the rules of electrostatics have been known as very interesting alternatives to the traditional inefficient electrodynamic transducers. When driving capacitive transducers from a Class D audio amplifier, the high impedance nature of the load represents a key challenge. The BPCM control scheme ensures a flat...

  12. 106-17 Telemetry Standards Digitized Audio Telemetry Standard Chapter 5

    Science.gov (United States)

    2017-07-01

    Digitized Audio Telemetry Standard 5.1 General This chapter defines continuously variable slope delta (CVSD) modulation as the standard for digitizing...audio signal. The CVSD modulator is, in essence, a 1-bit analog-to-digital converter. The output of this 1-bit encoder is a serial bit stream, where
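
    A minimal sketch of a CVSD-style 1-bit encoder, in which the step size grows after a run of identical output bits (slope overload) and otherwise decays; the step-size constants and the run length of three are illustrative assumptions, not values taken from the standard.

        # Sketch: continuously variable slope delta modulation, one bit per input sample.
        import numpy as np

        def cvsd_encode(x, delta_min=0.01, delta_max=0.2, growth=1.5, decay=0.98):
            estimate, delta = 0.0, delta_min
            bits, history = [], []
            for sample in x:
                bit = 1 if sample >= estimate else 0
                bits.append(bit)
                history = (history + [bit])[-3:]
                if len(history) == 3 and len(set(history)) == 1:
                    delta = min(delta * growth, delta_max)    # run of equal bits: speed up
                else:
                    delta = max(delta * decay, delta_min)     # otherwise let the slope decay
                estimate += delta if bit else -delta          # track the input with +/- steps
            return np.array(bits, dtype=np.uint8)

        fs = 16000
        t = np.arange(fs) / fs
        bitstream = cvsd_encode(0.5 * np.sin(2 * np.pi * 300 * t))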

  13. Multilevel inverter based class D audio amplifier for capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2014-01-01

    The reduced semiconductor voltage stress makes the multilevel inverters especially interesting, when driving capacitive transducers for audio applications. A ± 300 V flying capacitor class D audio amplifier driving a 100 nF load in the midrange region of 0.1-3.5 kHz with Total Harmonic Distortion...

  14. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  15. El Digital Audio Tape Recorder. Contra autores y creadores

    Directory of Open Access Journals (Sweden)

    Jun Ono

    2015-01-01

    Full Text Available The so-called "DAT" (short for "digital audio tape recorder") has long received coverage in the mass media of Japan and other countries, as a new and controversial electronic audio product of the Japanese consumer electronics industry. What has happened to the object of this controversy?

  16. Haptic and Audio Interaction Design

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 5th International Workshop on Haptic and Audio Interaction Design, HAID 2010 held in Copenhagen, Denmark, in September 2010. The 21 revised full papers presented were carefully reviewed and selected for inclusion in the book. The papers are or...

  17. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Full Text Available Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  18. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    Science.gov (United States)

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  19. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre...... and museum studies, existing case studies, and real life observations, the suggested framework instead stresses particular characteristics of the contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  20. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four...... different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  1. Nonlinear dynamic macromodeling techniques for audio systems

    Science.gov (United States)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
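
    In the same spirit, and in Python as in the paper (though not the authors' code), the sketch below shows only the polynomial-approximation step: fit a static polynomial to hypothetical measured input/output samples of an audio stage and then emulate its response on a new input block; the dynamic identification via the DFT and the Harmonic Balance Method is not shown.

        # Sketch: static polynomial behavioral model of an audio nonlinearity.
        import numpy as np

        # Hypothetical measured data from the audio system under test.
        x_meas = np.linspace(-1.0, 1.0, 2048)
        y_meas = np.tanh(2.0 * x_meas) + 0.02 * np.random.randn(x_meas.size)

        coeffs = np.polyfit(x_meas, y_meas, deg=7)   # 7th-order polynomial approximation
        model = np.poly1d(coeffs)

        # Emulate the identified response on a new input block.
        fs = 48000
        block = 0.8 * np.sin(2 * np.pi * 1000 * np.arange(256) / fs)
        emulated = model(block)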

  2. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin-Ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  3. Conflicting audio-haptic feedback in physically based simulation of walking sounds

    DEFF Research Database (Denmark)

    Turchet, Luca; Serafin, Stefania; Dimitrov, Smilen

    2010-01-01

    We describe an audio-haptic experiment conducted using a system which simulates in real time the auditory and haptic sensation of walking on different surfaces. The system is based on physical models, that drive both the haptic and audio synthesizers, and a pair of shoes enhanced with sensors...... and actuators. Such an experiment was run to examine the ability of subjects to recognize the different surfaces with both coherent and incoherent audio-haptic stimuli. Results show that in this kind of task the auditory modality is dominant over the haptic one....

  4. Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.

    Science.gov (United States)

    Hage, Steffen R; Nieder, Andreas

    2015-05-06

    Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2015 the authors 0270-6474/15/357030-11$15.00/0.

  5. Effects of Hearing Protection Device Attenuation on Unmanned Aerial Vehicle (UAV) Audio Signatures

    Science.gov (United States)

    2016-03-01

    Report: Effects of Hearing Protection Device Attenuation on Unmanned Aerial Vehicle (UAV) Audio Signatures, by Melissa Bezandry, Adrienne Raglin, and John Noble (Research Laboratory). Approved for public release; distribution...

  6. ESA personal communications and digital audio broadcasting systems based on non-geostationary satellites

    Science.gov (United States)

    Logalbo, P.; Benedicto, J.; Viola, R.

    1993-01-01

    Personal Communications and Digital Audio Broadcasting are two new services that the European Space Agency (ESA) is investigating for future European and Global Mobile Satellite systems. ESA is active in promoting these services in their various mission options including non-geostationary and geostationary satellite systems. A Medium Altitude Global Satellite System (MAGSS) for global personal communications at L and S-band, and a Multiregional Highly inclined Elliptical Orbit (M-HEO) system for multiregional digital audio broadcasting at L-band are described. Both systems are being investigated by ESA in the context of future programs, such as Archimedes, which are intended to demonstrate the new services and to develop the technology for future non-geostationary mobile communication and broadcasting satellites.

  7. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
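
    A minimal sketch of Bayes fusion of audio and visual evidence over a discretised azimuth axis (illustrative only; the Gaussian likelihood widths, the measurements and the uniform prior are assumptions, not the paper's sensor models):

        # Sketch: fuse a coarse audio localization cue with a sharper visual cue.
        import numpy as np

        azimuth = np.linspace(-90, 90, 181)              # candidate directions in degrees

        def likelihood(measured, sigma):
            return np.exp(-0.5 * ((azimuth - measured) / sigma) ** 2)

        prior = np.ones_like(azimuth) / azimuth.size     # uniform prior over directions
        audio_like = likelihood(measured=12.0, sigma=15.0)   # coarse sound localization
        visual_like = likelihood(measured=8.0, sigma=4.0)    # sharper face detection

        posterior = prior * audio_like * visual_like     # Bayes fusion, independent sensors
        posterior /= posterior.sum()
        print("fused estimate: %.1f degrees" % azimuth[np.argmax(posterior)])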

  8. Technical Evaluation Report 31: Internet Audio Products (3/3)

    Directory of Open Access Journals (Sweden)

    Jim Rudolph

    2004-08-01

    Full Text Available Two contrasting additions to the online audio market are reviewed: iVocalize, a browser-based audio-conferencing software, and Skype, a PC-to-PC Internet telephone tool. These products are selected for review on the basis of their success in gaining rapid popular attention and usage during 2003-04. The iVocalize review emphasizes the product’s role in the development of a series of successful online audio communities – notably several serving visually impaired users. The Skype review stresses the ease with which the product may be used for simultaneous PC-to-PC communication among up to five users. Editor’s Note: This paper serves as an introduction to reports about online community building, and reviews of online products for disabled persons, in the next ten reports in this series. JPB, Series Ed.

  9. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  10. A perceptual model for sinusoidal audio coding based on spectral integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  11. Audio production principles practical studio applications

    CERN Document Server

    Elmosnino, Stephane

    2018-01-01

    A new and fully practical guide to all of the key topics in audio production, this book covers the entire workflow from pre-production, to recording all kinds of instruments, to mixing theories and tools, and finally to mastering.

  12. Securing Digital Audio using Complex Quadratic Map

    Science.gov (United States)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy to do, and therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio, so a data-securing method is needed that is both robust and fast. One of the methods that matches these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM function passes all 15 NIST tests, which means that the key stream generated using this CQM is shown to be random. In addition, samples of encrypted digital sound, when tested using a goodness-of-fit test, are shown to be uniform, so digital audio secured using this method is not vulnerable to frequency analysis attacks. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attacks. Finally, the processing speed for both encryption and decryption is on average about 450 times faster than the duration of the digital audio itself.
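
    As a rough illustration of the kind of chaos-based scheme described above, the sketch below drives a simple XOR stream cipher from the complex quadratic map z_{k+1} = z_k^2 + c. The seed, the value of c, the bounding guard and the byte-extraction rule are illustrative assumptions, not the parameters reported in the paper.

```python
import numpy as np

def cqm_keystream(n_bytes, z0=0.1 + 0.1j, c=-0.745 + 0.113j):
    """Pseudo-random byte stream from the complex quadratic map
    z_{k+1} = z_k**2 + c.  Seed, c, the bounding guard and the
    byte-extraction rule are illustrative, not the paper's values."""
    z = z0
    out = np.empty(n_bytes, dtype=np.uint8)
    for k in range(n_bytes):
        z = z * z + c
        if abs(z) > 2.0:              # keep the orbit bounded
            z = z / (abs(z) ** 2)
        out[k] = int(abs(z.real) * 1e6) % 256
    return out

def xor_cipher(data: bytes, key_stream) -> bytes:
    """Encrypt or decrypt raw PCM bytes by XOR with the keystream."""
    return bytes(b ^ int(k) for b, k in zip(data, key_stream))
```

    Because XOR is an involution, calling xor_cipher again with the same seed and c decrypts; changing either value by a tiny amount regenerates a completely different keystream, which is where the key sensitivity mentioned in the abstract comes from.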

  13. Training of audio descriptors: the cinematographic aesthetics as basis for the learning of the audio description aesthetics – materials, methods and products

    Directory of Open Access Journals (Sweden)

    Soraya Ferreira Alves

    2016-12-01

    Full Text Available Audio description (AD), a resource used to make theater, cinema, TV, and visual works of art accessible to people with visual impairments, is slowly being implemented in Brazil and demanding qualified professionals. Based on this, the article reports the results of research developed during post-doctoral studies. The study confronts film aesthetics with audio description techniques to check how knowledge of the former can contribute to the training of audio describers. Through action research, a short film adapted from O Peru de Natal (Christmas Turkey), a short story by the Brazilian writer Mário de Andrade, was produced. The film as well as its audio description were carried out with students and teachers of the discipline Intersemiotic Translation at the State University of Ceará. Thus, we intended to suggest pedagogical procedures generated by the students' experiences, by evaluating their choices and their implications.

  14. Direct-conversion switching-mode audio power amplifier with active capacitive voltage clamp

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    This paper discusses the advantages and problems when implementing direct energy conversion switching-mode audio power amplifiers. It is shown that the total integration of the power supply and Class D audio power amplifier into one compact direct converter can simplify design, increase efficiency and integration level, reduce product volume and lower its cost. As an example, the principle of operation and the measurements made on a direct-conversion switching-mode audio power amplifier with active capacitive voltage clamp are presented. (au)

  15. Direct-conversion switching-mode audio power amplifier with active capacitive voltage clamp

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    This paper discusses the advantages and problems when implementing direct energy conversion switching-mode audio power amplifiers. It is shown that the total integration of the power supply and Class D audio power amplifier into one compact direct converter can simplify the design, increase efficiency, reduce the product volume and lower its cost. As an example, the principle of operation and the measurements made on a direct-conversion switching-mode audio power amplifier with active capacitive voltage clamp are presented.

  16. An Exploratory Evaluation of User Interfaces for 3D Audio Mixing

    DEFF Research Database (Denmark)

    Gelineck, Steven; Korsgaard, Dannie Michael

    2015-01-01

    The paper presents an exploratory evaluation comparing different versions of a mid-air gesture based interface for mixing 3D audio exploring: (1) how such an interface generally compares to a more traditional physical interface, (2) methods for grabbing/releasing audio channels in mid-air and (3...

  17. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    Science.gov (United States)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of continuous wavelet transform. We focus specifically on three types of sources, namely, atmospherics, slow tails and whistlers, that cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it on real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, overseas French territories. Most of the analysed atmospherics and slow tails display linear polarization, whereas analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of electromagnetic waves observed in processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio
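
    A minimal sketch of the general idea, detecting transient events by thresholding a continuous wavelet transform scalogram. It uses a Morlet wavelet and a simple mean-plus-k-sigma threshold, both of which are assumptions for illustration rather than the authors' detector or polarization analysis.

```python
import numpy as np
import pywt

def detect_transients(x, fs, fmin=10.0, fmax=10_000.0, thresh_sigma=5.0):
    """Flag candidate lightning-induced transients (atmospherics, slow
    tails, whistlers) by thresholding a Morlet CWT scalogram."""
    freqs = np.geomspace(fmin, min(fmax, 0.45 * fs), num=64)
    fc = pywt.central_frequency("morl")      # wavelet centre frequency
    scales = fc * fs / freqs                 # scales matching the target band
    coefs, _ = pywt.cwt(x, scales, "morl", sampling_period=1.0 / fs)
    envelope = np.abs(coefs).max(axis=0)     # strongest response per sample
    level = envelope.mean() + thresh_sigma * envelope.std()
    return np.flatnonzero(envelope > level)  # sample indices above threshold
```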

  18. Precision Scaling of Neural Networks for Efficient Audio Processing

    OpenAIRE

    Ko, Jong Hwan; Fromm, Josh; Philipose, Matthai; Tashev, Ivan; Zarar, Shuayb

    2017-01-01

    While deep neural networks have shown powerful performance in many audio applications, their large computation and memory demand has been a challenge for real-time processing. In this paper, we study the impact of scaling the precision of neural networks on the performance of two common audio processing tasks, namely, voice-activity detection and single-channel speech enhancement. We determine the optimal pair of weight/neuron bit precision by exploring its impact on both the performance and ...

  19. Design guidelines for audio presentation of graphs and tables

    OpenAIRE

    Brown, L.M.; Brewster, S.A.; Ramloll, S.A.; Burton, R.; Riedel, B.

    2003-01-01

    Audio can be used to make visualisations accessible to blind and visually impaired people. The MultiVis Project has carried out research into suitable methods for presenting graphs and tables to blind people through the use of both speech and non-speech audio. This paper presents guidelines extracted from this research. These guidelines will enable designers to implement visualisation systems for blind and visually impaired users, and will provide a framework for researchers wishing to invest...

  20. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    Science.gov (United States)

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  1. Revealing the ecological content of long-duration audio-recordings of the environment through clustering and visualisation.

    Science.gov (United States)

    Phillips, Yvonne F; Towsey, Michael; Roe, Paul

    2018-01-01

    Audio recordings of the environment are an increasingly important technique to monitor biodiversity and ecosystem function. While the acquisition of long-duration recordings is becoming easier and cheaper, the analysis and interpretation of that audio remains a significant research area. The issue addressed in this paper is the automated reduction of environmental audio data to facilitate ecological investigations. We describe a method that first reduces environmental audio to vectors of acoustic indices, which are then clustered. This can reduce the audio data by six to eight orders of magnitude yet retain useful ecological information. We describe techniques to visualise sequences of cluster occurrence (using for example, diel plots, rose plots) that assist interpretation of environmental audio. Colour coding acoustic clusters allows months and years of audio data to be visualised in a single image. These techniques are useful in identifying and indexing the contents of long-duration audio recordings. They could also play an important role in monitoring long-term changes in species abundance brought about by habitat degradation and/or restoration.
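
    The pipeline of reducing audio to acoustic indices and then clustering can be sketched roughly as follows; the four indices, the one-minute resolution and the use of k-means are illustrative stand-ins for the richer index set and clustering used in the paper.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def minute_indices(y, sr):
    """Four simple acoustic indices for one minute of audio."""
    S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))
    return [
        float(np.mean(librosa.feature.rms(S=S))),                      # energy
        float(np.mean(librosa.feature.spectral_centroid(S=S, sr=sr))), # brightness
        float(np.mean(librosa.feature.spectral_flatness(S=S))),        # noisiness
        float(np.mean(librosa.feature.zero_crossing_rate(y))),         # activity
    ]

def cluster_recording(path, sr=22050, n_clusters=12):
    """One cluster label per minute of a long environmental recording."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    step = 60 * sr
    feats = np.array([minute_indices(y[i:i + step], sr)
                      for i in range(0, len(y) - step + 1, step)])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```

    Colour-coding the per-minute labels by time of day then yields the kind of diel plot the authors describe.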

  2. Real Time Recognition Of Speakers From Internet Audio Stream

    Directory of Open Access Journals (Sweden)

    Weychan Radoslaw

    2015-09-01

    Full Text Available In this paper we present an automatic speaker recognition technique using lossy (encoded) speech signal streams from Internet radio. We show the influence of the audio encoder (e.g., its bitrate) on the speaker model quality. The model of each speaker was calculated with the use of the Gaussian mixture model (GMM) approach. Both the speaker recognition and the further analysis were realized with the use of short utterances to facilitate real-time processing. The neighborhoods of the speaker models were analyzed with the use of the ISOMAP algorithm. The experiments were based on four 1-hour public debates with 7–8 speakers (including the moderator), acquired from the Polish radio Internet services. The presented software was developed with the MATLAB environment.
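
    A compact sketch of GMM-based speaker identification on short utterances, in the spirit of the approach described above (MFCC front end, one Gaussian mixture model per speaker, maximum-likelihood decision); the feature settings and mixture size are assumptions, and the ISOMAP neighborhood analysis is not reproduced here.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr):
    """MFCC feature frames (rows) for a decoded speech segment."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_speaker_models(train_clips, sr, n_components=16):
    """Fit one GMM per speaker; train_clips maps name -> list of waveforms."""
    models = {}
    for name, clips in train_clips.items():
        X = np.vstack([mfcc_frames(y, sr) for y in clips])
        models[name] = GaussianMixture(n_components, covariance_type="diag").fit(X)
    return models

def identify(y, sr, models):
    """Highest average log-likelihood over all speaker models wins."""
    X = mfcc_frames(y, sr)
    return max(models, key=lambda name: models[name].score(X))
```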

  3. Influence of the Quality of Consumer Headphones in the Perception of Spatial Audio

    Directory of Open Access Journals (Sweden)

    Pablo Gutierrez-Parera

    2016-04-01

    Full Text Available High quality headphones can generate a realistic sound immersion when reproducing binaural recordings. However, most people commonly use consumer headphones of inferior quality, such as the ones provided with smartphones or music players. Factors such as a weak frequency response, distortion and the sensitivity disparity between the left and right transducers could be some of the degrading factors. In this work, we study how these factors affect spatial perception. To this purpose, a series of perceptual tests has been carried out with a virtual headphone listening test methodology. The first experiment focuses on the analysis of how the disparity of sensitivity between the two transducers affects the final result. The second test studies the influence of the frequency response, relating quality and spatial impression. The third test analyzes the effects of distortion, using a Volterra kernel scheme to simulate the distortion with convolutions. Finally, the fourth test tries to relate the quality of the frequency response to the accuracy of azimuth localization. The conclusions of the experiments are: the disparity between both transducers can affect the localization of the source; the perception of quality and spatial impression has a high correlation; the distortion produced by the range of headphones tested at a fixed level does not affect the perception of binaural sound; and some frequency bands have an important role in the front-back confusions.
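
    The distortion simulation mentioned in the third experiment can be approximated with a convolutional Volterra model. The sketch below assumes diagonal first- and second-order kernels (a simplification of the full Volterra scheme); the kernel values shown are placeholders, not measured headphone data.

```python
import numpy as np
from scipy.signal import fftconvolve

def volterra_diagonal(x, h1, h2):
    """Weakly nonlinear transducer model with diagonal Volterra kernels:
    y[n] = (h1 * x)[n] + (h2 * x^2)[n], both terms plain convolutions."""
    linear = fftconvolve(x, h1, mode="full")[: len(x)]
    quadratic = fftconvolve(x * x, h2, mode="full")[: len(x)]
    return linear + quadratic

# Placeholder kernels only, not measured headphone responses:
h1 = np.array([1.0, 0.05, -0.02])   # near-flat linear part
h2 = 0.01 * np.array([1.0, 0.5])    # mild second-order distortion
```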

  4. Deutsch Durch Audio-Visuelle Methode: An Audio-Lingual-Oral Approach to the Teaching of German.

    Science.gov (United States)

    Dickinson Public Schools, ND. Instructional Media Center.

    This teaching guide, designed to accompany Chilton's "Deutsch Durch Audio-Visuelle Methode" for German 1 and 2 in a three-year secondary school program, focuses major attention on the operational plan of the program and a student orientation unit. A section on teaching a unit discusses four phases: (1) presentation, (2) explanation, (3)…

  5. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Full Text Available Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.

  6. Design And Construction Of 300W Audio Power Amplifier For Classroom

    Directory of Open Access Journals (Sweden)

    Shune Lei Aung

    2015-07-01

    Full Text Available This paper describes the design and construction of a 300 W audio power amplifier for classroom use. The construction of this amplifier includes a microphone preamplifier, tone preamplifier, equalizer, line amplifier, output power amplifier and sound level indicator. The output power amplifier is designed as an O.C.L. system and constructed using Class B operation among the many amplifier classes. There are two types of O.C.L. system, the quasi system and the complementary system; of these, the complementary system is used in the construction of the 300 W audio power amplifier. The Multisim software is utilized in the construction of the audio power amplifier.

  7. Créer des ressources audio pour le cours de FLE

    Directory of Open Access Journals (Sweden)

    Florence Gérard Lojacono

    2010-01-01

    Full Text Available These last ten years, web applications have gained ascendancy over the consumer society, as shown by the success of iTunes and the increase of podcasting. The academic world, particularly in the field of language teaching, could take advantage of this massive use of audio files. The creation and the diffusion of customized ad hoc audio files and the broadcast of these resources through educational podcasts address the upcoming challenges of a knowledge-based society. Teaching and learning with audio files also meet the recommendations of the European Higher Education Area (EHEA). This paper will provide language teachers, especially French teachers, with the tools to create, edit, upload and play their own audio files. No specific computer skills are required.

  8. A high PSRR Class-D audio amplifier IC based on a self-adjusting voltage reference

    OpenAIRE

    Huffenus , Alexandre; Pillonnet , Gaël; Abouchi , Nacer; Goutti , Frédéric; Rabary , Vincent; Cittadini , Robert

    2010-01-01

    In a wide range of applications, audio amplifiers require a large Power Supply Rejection Ratio (PSRR) that the current Class-D architecture cannot reach. This paper proposes a self-adjusting internal voltage reference scheme that sets the bias voltages of the amplifier without sacrificing output dynamics. This solution relaxes the constraints on gain and feedback resistor matching, which was previously the limiting factor for the PSRR. Theory of operation, design and IC ...

  9. Pengaruh layanan informasi bimbingan konseling berbantuan media audio visual terhadap empati siswa

    Directory of Open Access Journals (Sweden)

    Rita Kumalasari

    2017-05-01

    The research resulted in an audio-visual media counseling model that is effective and practical for increasing students' empathy, comprising a rationale, key concepts, definition, purpose, content model, the expected role and qualifications of the tutor (counselor), procedures or steps for implementing the audio-visual material, evaluation, follow-up, and a support system. The intervention proved effective in improving student behavior: students' empathy increased by 28.9 percentage points, from 45.08% to 73.98%, and the increase occurred in all aspects of empathy. Keywords: effective, audio visual, empathy

  10. Quick Response (QR) Codes for Audio Support in Foreign Language Learning

    Science.gov (United States)

    Vigil, Kathleen Murray

    2017-01-01

    This study explored the potential benefits and barriers of using quick response (QR) codes as a means by which to provide audio materials to middle-school students learning Spanish as a foreign language. Eleven teachers of Spanish to middle-school students created transmedia materials containing QR codes linking to audio resources. Students…

  11. Overview of the 2015 Workshop on Speech, Language and Audio in Multimedia

    NARCIS (Netherlands)

    Gravier, Guillaume; Jones, Gareth J.F.; Larson, Martha; Ordelman, Roeland J.F.

    2015-01-01

    The Workshop on Speech, Language and Audio in Multimedia (SLAM) positions itself at the crossroads of multiple scientific fields - music and audio processing, speech processing, natural language processing and multimedia - to discuss and stimulate research results, projects, datasets and

  12. Audio-visual Classification and Fusion of Spontaneous Affect Data in Likelihood Space

    NARCIS (Netherlands)

    Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja

    2010-01-01

    This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the generative model structure in the classifier is

  13. Can audio recording of outpatient consultations improve patient outcome?

    DEFF Research Database (Denmark)

    Wolderslund, Maiken; Kofoed, Poul-Erik; Axboe, Mette

    different departments: Orthopedics, Urology, Internal Medicine and Pediatrics. A total of 5,460 patients will be included from the outpatient clinics. All patients randomized to an intervention group are offered audio recording of their consultation. An Interactive Voice Response platform enables an audio....... The intervention will be evaluated using a questionnaire measuring different aspects of patients' recall and understanding of the information given, patients' need for additional information subsequent to the consultation, and their overall satisfaction with the consultation. Results: The study will be conducted from...

  14. Delivering high quality hip fracture rehabilitation: the perspective of occupational and physical therapy practitioners.

    Science.gov (United States)

    Leland, Natalie E; Lepore, Michael; Wong, Carin; Chang, Sun Hwa; Freeman, Lynn; Crum, Karen; Gillies, Heather; Nash, Paul

    2018-03-01

    The majority of post-acute hip fracture rehabilitation in the US is delivered in skilled nursing facilities (SNFs). Currently, there are limited guidelines that equip occupational and physical therapy practitioners with a summary of what constitutes evidence-based high quality rehabilitation. Thus, this study aimed to identify rehabilitation practitioners' perspectives on the practices that constitute high quality hip fracture rehabilitation. Focus groups were conducted with 99 occupational and physical therapy practitioners working in SNFs in southern California. Purposive sampling of facilities was conducted to capture variation in key characteristics known to impact care delivery for this patient population (e.g., financial resources, staffing, and patient case-mix). Questions aimed to elicit practitioners' perspectives on high quality hip fracture rehabilitation practices. Each session was audio-recorded and transcribed. Data were systematically analyzed using a modified grounded theory approach. Seven themes emerged: objectives of care; first 72 h; positioning, pain, and precautions; use of standardized assessments; episode of care practices; facilitating insight into progress; and interdisciplinary collaboration. Clinical guidelines are critical tools to facilitate clinical decision-making and achieve desired patient outcomes. The findings of this study highlight the practitioners' perspective on what constitutes high quality hip fracture rehabilitation. This work provides critical information to advance the development of stakeholder-driven rehabilitation clinical guidelines. Future research is needed to verify the findings from other stakeholders (e.g., patients), ensure the alignment of our findings with current evidence, and develop measures for evaluating their delivery and relationship to desired outcomes. Implications for rehabilitation: This study highlights occupational and physical therapy practitioners' perspectives on the cumulative best

  15. Audio Control Handbook For Radio and Television Broadcasting. Third Revised Edition.

    Science.gov (United States)

    Oringel, Robert S.

    Audio control is the operation of all the types of sound equipment found in the studios and control rooms of a radio or television station. Written in a nontechnical style for beginners, the book explains thoroughly the operation of all types of audio equipment. Diagrams and photographs of commercial consoles, microphones, turntables, and tape…

  16. Fault Diagnosis using Audio and Vibration Signals in a Circulating Pump

    International Nuclear Information System (INIS)

    Henríquez, P; Alonso, J B; Ferrer, M A; Travieso, C M; Gómez, G

    2012-01-01

    This paper presents the use of audio and vibration signals in fault diagnosis of a circulating pump. The novelty of this paper is the use of audio signals acquired by microphones. The objective of this paper is to determine whether audio signals are capable of distinguishing between normal and different abnormal conditions in a circulating pump. In order to compare results, vibration signals are also acquired and analysed. Wavelet packet decomposition is used to obtain the energies in different frequency bands from the audio and vibration signals. Neural networks are used to evaluate the discrimination ability of the extracted features between normal and fault conditions. The results show that information from sound signals can distinguish between normal and different faulty conditions with success rates of 83.33%, 98% and 91.33% for each microphone, respectively. These success rates are similar to and even higher than those obtained from accelerometers (68%, 90.67% and 71.33% for each accelerometer, respectively). The success rates also show that the position of the microphones and accelerometers affects the final results.
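
    A rough sketch of the feature pipeline described above: wavelet packet decomposition of each audio or vibration frame into frequency-band energies, followed by a small neural network classifier. The wavelet choice, decomposition level and network size are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def band_energies(frame, wavelet="db4", level=3):
    """Relative energy of each terminal wavelet-packet node (frequency band)."""
    wp = pywt.WaveletPacket(frame, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    e = np.array([np.sum(node.data ** 2) for node in nodes])
    return e / e.sum()

def train_diagnoser(frames, labels):
    """frames: list of 1-D signal frames; labels: 0 = normal, 1..k = fault types."""
    X = np.array([band_energies(f) for f in frames])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
```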

  17. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  18. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows ranging in width from 2° to 64° of azimuth (2°, 4°, 8°, 16°, 32°, or 64°). Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  19. Turkish Music Genre Classification using Audio and Lyrics Features

    Directory of Open Access Journals (Sweden)

    Önder ÇOBAN

    2017-05-01

    Full Text Available Music Information Retrieval (MIR) has become a popular research area in recent years. In this context, researchers have developed music information systems to find solutions for such major problems as automatic playlist creation, hit song detection, and music genre or mood classification. Meta-data, lyrics, or the melodic content of music have been used as feature resources in previous works. However, lyrics are not often used in MIR systems, and the number of works in this field is insufficient, especially for Turkish. In this paper, firstly, we have extended our previously created Turkish MIR (TMIR) dataset, which comprises Turkish lyrics, by including the audio file of each song. Secondly, we have investigated the effect of using audio and textual features together or separately on automatic Music Genre Classification (MGC). We have extracted textual features from lyrics using different feature extraction models such as word2vec and traditional Bag of Words. We have conducted our experiments with the Support Vector Machine (SVM) algorithm and analysed the impact of feature selection and different feature groups on MGC. We have considered lyrics-based MGC as a text classification task and also investigated the effect of the term weighting method. Experimental results show that textual features can be as effective as audio features for Turkish MGC, especially when a supervised term weighting method is employed. We achieved the highest success rate of 99.12% by using both audio and textual features together.
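
    An early-fusion baseline in the spirit of the study can be sketched as below; TF-IDF stands in for the supervised term weighting reported as most effective, and audio_feats is any per-song audio feature matrix (e.g., MFCC statistics), both assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def train_genre_classifier(lyrics, audio_feats, genres):
    """Early fusion of lyric and audio features for genre classification."""
    vec = TfidfVectorizer(max_features=5000)
    X_text = vec.fit_transform(lyrics)                        # (songs, terms), sparse
    X = hstack([X_text, csr_matrix(np.asarray(audio_feats))]) # concatenate modalities
    clf = LinearSVC().fit(X, genres)
    return vec, clf
```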

  20. ANALYSIS OF MULTIMODAL FUSION TECHNIQUES FOR AUDIO-VISUAL SPEECH RECOGNITION

    Directory of Open Access Journals (Sweden)

    D.V. Ivanko

    2016-05-01

    Full Text Available The paper presents an analytical review covering the latest achievements in the field of audio-visual (AV) fusion (integration of multimodal information). We discuss the main challenges and report on approaches to address them. One of the most important tasks of AV integration is to understand how the modalities interact and influence each other. The paper addresses this problem in the context of AV speech processing and speech recognition. In the first part of the review we set out the basic principles of AV speech recognition and give the classification of audio and visual features of speech. Special attention is paid to the systematization of the existing techniques and the AV data fusion methods. In the second part we provide a consolidated list of tasks and applications that use AV fusion, based on our analysis of the research area. We also indicate the methods, techniques, and audio and video features used. We propose a classification of AV integration, and discuss the advantages and disadvantages of different approaches. We draw conclusions and offer our assessment of the future in the field of AV fusion. In further research we plan to implement a system of audio-visual Russian continuous speech recognition using advanced methods of multimodal fusion.

  1. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    16 CFR 307.8 (2010), Commercial Practices, Federal Trade Commission Regulations under the ... Act of 1986, Advertising Disclosures: Requirements for disclosure in audiovisual and audio advertising.

  2. Introduction of audio gating to further reduce organ motion in breathing synchronized radiotherapy

    International Nuclear Information System (INIS)

    Kubo, H. Dale; Wang Lili

    2002-01-01

    With breathing synchronized radiotherapy (BSRT), a voltage signal derived from an organ displacement detector is usually displayed on the vertical axis whereas the elapsed time is shown on the horizontal axis. The voltage gate window is set on the breathing voltage signal. Whenever the breathing signal falls between the two gate levels, a gate pulse is produced to enable the treatment machine. In this paper a new gating mechanism, audio (or time-sequence) gating, is introduced and is integrated into the existing voltage gating system. The audio gating takes advantage of the repetitive nature of the breathing signal when repetitive audio instruction is given to the patient. The audio gating is aimed at removing the regions of sharp rises and falls in the breathing signal that cannot be removed by the voltage gating. When the breathing signal falls between voltage gate levels as well as between audio-gate levels, the voltage- and audio-gated radiotherapy (ART) system will generate an AND gate pulse. When this gate pulse is received by a linear accelerator, the linear accelerator becomes 'enabled' for beam delivery and will deliver the beam when all other interlocks are removed. This paper describes a new gating mechanism and a method of recording the beam-on signal, both of which are configured into a laptop computer. The paper also presents evidence of some clinical advantages achieved with the ART system.

  3. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of the paper is to propose a feature fusion based Audio-Visual Speaker Identification (AVSI) system under varied illumination conditions. Among the different fusion strategies, feature level fusion has been used for the proposed AVSI system, where a Hidden Markov Model (HMM) is used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, both Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to get the audio feature vectors, and Active Shape Model (ASM) based appearance and shape facial features are concatenated to obtain the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, the Principal Component Analysis (PCA) method is used. The VALID audio-visual database is used to measure the performance of the proposed system, where four different illumination levels of lighting conditions are considered. Experimental results focus on the significance of the proposed audio-visual speaker identification system with various combinations of audio and visual features.
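
    Feature-level fusion with PCA reduction, as described above, reduces to a few lines; the component count is an arbitrary choice here, and the subsequent HMM training stage is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(mfcc, lpcc, visual, n_components=30):
    """Feature-level fusion: concatenate frame-aligned audio (MFCC + LPCC)
    and visual (ASM-based) vectors, then reduce dimension with PCA.
    The reduced vectors would subsequently train the HMM classifier."""
    fused = np.hstack([mfcc, lpcc, visual])   # (frames, d_audio + d_visual)
    return PCA(n_components=n_components).fit_transform(fused)
```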

  4. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    Science.gov (United States)

    Hoffmann, Matthew Douglas

    Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding to what songs in a database a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual source. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create

  5. Secondary Analysis of Audio Data. Technical Procedures for Virtual Anonymization and Pseudonymization

    Directory of Open Access Journals (Sweden)

    Henning Pätzold

    2005-01-01

    Full Text Available Qualitative material presented as audio data requires a greater degree of anonymity protection than, for example, textual data. Apart from the verbal content, it carries paraverbal aspects including voice characteristics, thus making it easier to identify the speaker. This complicates secondary analysis or reanalysis conducted by researchers who were not involved in the data collection. Difficulties increase if the chances are high that the researcher and the interviewee come into contact, for example through a meeting. This paper describes the technical procedures that are used to modify the sound of the audio source in a way that reduces the possibility of recognition (i.e., to a level similar to that of a carefully written transcript). A discussion of the technical possibilities of this procedure along with an exploration of the boundaries of anonymization is presented. URN: urn:nbn:de:0114-fqs0501249
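
    One possible technical realization of the kind of sound modification the paper discusses is a pitch shift applied to the whole recording; the tool chain (librosa/soundfile) and the shift of three semitones are illustrative assumptions, not the authors' exact procedure.

```python
import librosa
import soundfile as sf

def anonymize_voice(in_path, out_path, n_steps=-3.0):
    """Shift the pitch of an interview recording to mask voice identity
    while keeping the verbal content intelligible."""
    y, sr = librosa.load(in_path, sr=None, mono=True)
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, y_shifted, sr)
```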

  6. Audio segmentation of broadcast news in the Albayzin-2010 evaluation: overview, results, and discussion

    Directory of Open Access Journals (Sweden)

    Butko Taras

    2011-01-01

    Full Text Available Recently, audio segmentation has attracted research interest because of its usefulness in several applications like audio indexing and retrieval, subtitling, monitoring of acoustic scenes, etc. Moreover, a preceding audio segmentation stage may be useful to improve the robustness of speech technologies like automatic speech recognition and speaker diarization. In this article, we present the evaluation of broadcast news audio segmentation systems carried out in the context of the Albayzín-2010 evaluation campaign. That evaluation consisted of segmenting audio from the 3/24 Catalan TV channel into five acoustic classes: music, speech, speech over music, speech over noise, and other. The evaluation results demonstrated the difficulty of this segmentation task. In this article, after presenting the database and metric, as well as the feature extraction methods and segmentation techniques used by the submitted systems, the experimental results are analyzed and compared, with the aim of gaining an insight into the proposed solutions and looking for promising directions.

  7. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    Science.gov (United States)

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate a single model while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio based methods may be
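
    The calibration idea, fitting flow = a * envelope^b from a single simultaneous audio and spirometer recording, can be sketched as follows; the envelope extraction (Hilbert transform plus low-pass) and the filter settings are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def acoustic_envelope(audio, fs, cutoff_hz=20.0):
    """Amplitude envelope of an inhalation recording (Hilbert + low-pass)."""
    env = np.abs(hilbert(audio))
    b, a = butter(2, cutoff_hz / (fs / 2))
    return filtfilt(b, a, env)

def fit_power_law(envelope, flow):
    """Calibrate flow = a * envelope**b from one simultaneous audio and
    spirometer recording, via linear regression in log-log space."""
    mask = (envelope > 0) & (flow > 0)
    b, log_a = np.polyfit(np.log(envelope[mask]), np.log(flow[mask]), 1)
    return np.exp(log_a), b

def estimate_flow(envelope, a, b):
    """Apply the calibrated model to a new inhalation's envelope."""
    return a * np.maximum(envelope, 1e-12) ** b
```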

  8. An ESL Audio-Script Writing Workshop

    Science.gov (United States)

    Miller, Carla

    2012-01-01

    The roles of dialogue, collaborative writing, and authentic communication have been explored as effective strategies in second language writing classrooms. In this article, the stages of an innovative, multi-skill writing method, which embeds students' personal voices into the writing process, are explored. A 10-step ESL Audio Script Writing Model…

  9. Digital signal processing methods and algorithms for audio conferencing systems

    OpenAIRE

    Lindström, Fredric

    2007-01-01

    Today, we are interconnected almost all over the planet. Large multinational companies operate worldwide, but also an increasing number of small and medium sized companies do business overseas. As people travel to meet and do business, the already burdened planet is subject to even more strain. Audio conferencing is an attractive alternative to travel, which is becoming more and more appreciated. Audio conferences can of course not replace all types of meetings, but can help companies to cut ...

  10. One Message, Many Voices: Mobile Audio Counselling in Health Education.

    Science.gov (United States)

    Pimmer, Christoph; Mbvundula, Francis

    2018-01-01

    Health workers' use of counselling information on their mobile phones for health education is a central but little understood phenomenon in numerous mobile health (mHealth) projects in Sub-Saharan Africa. Drawing on empirical data from an interpretive case study in the setting of the Millennium Villages Project in rural Malawi, this research investigates the ways in which community health workers (CHWs) perceive that audio-counselling messages support their health education practice. Three main themes emerged from the analysis: phone-aided audio counselling (1) legitimises the CHWs' use of mobile phones during household visits; (2) helps CHWs to deliver a comprehensive counselling message; (3) supports CHWs in persuading communities to change their health practices. The findings show the complexity and interplay of the multi-faceted, sociocultural, political, and socioemotional meanings associated with audio-counselling use. Practical implications and the demand for further research are discussed.

  11. Impact of Audio-Coaching on the Position of Lung Tumors

    International Nuclear Information System (INIS)

    Haasbeek, Cornelis J.A.; Spoelstra, Femke; Lagerwaard, Frank J.; Soernsen de Koste, John R. van; Cuijpers, Johan P.; Slotman, Ben J.; Senan, Suresh

    2008-01-01

    Purpose: Respiration-induced organ motion is a major source of positional, or geometric, uncertainty in thoracic radiotherapy. Interventions to mitigate the impact of motion include audio-coached respiration-gated radiotherapy (RGRT). To assess the impact of coaching on average tumor position during gating, we analyzed four-dimensional computed tomography (4DCT) scans performed both with and without audio-coaching. Methods and Materials: Our RGRT protocol requires that an audio-coached 4DCT scan is performed when the initial free-breathing 4DCT indicates a potential benefit with gating. We retrospectively analyzed 22 such paired scans in patients with well-circumscribed tumors. Changes in lung volume and position of internal target volumes (ITV) generated in three consecutive respiratory phases at both end-inspiration and end-expiration were analyzed. Results: Audio-coaching increased end-inspiration lung volumes by a mean of 10.2% (range, -13% to +43%) when compared with free breathing (p = 0.001). The mean three-dimensional displacement of the center of ITV was 3.6 mm (SD, 2.5; range, 0.3-9.6mm), mainly caused by displacement in the craniocaudal direction. Displacement of ITV caused by coaching was more than 5 mm in 5 patients, all of whom were in the subgroup of 9 patients showing total tumor motion of 10 mm or more during both coached and uncoached breathing. Comparable ITV displacements were observed at end-expiration phases of the 4DCT. Conclusions: Differences in ITV position exceeding 5 mm between coached and uncoached 4DCT scans were detected in up to 56% of mobile tumors. Both end-inspiration and end-expiration RGRT were susceptible to displacements. This indicates that the method of audio-coaching should remain unchanged throughout the course of treatment

  12. Robustness evaluation of transactional audio watermarking systems

    Science.gov (United States)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

    Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept since it provides a useful mechanism to track illicit copies by persistently attaching property rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted with the term bitstream watermarking, is beneficial since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented the complexity, the audio quality and the detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted to the performance of PCM spread spectrum watermarking.

  13. Realisierung eines verzerrungsarmen Open-Loop Klasse-D Audio-Verstärkers mit SB-ZePoC

    Directory of Open Access Journals (Sweden)

    O. Schnick

    2007-06-01

    Full Text Available In recent years, the development of Class-D amplifiers for audio applications has attracted increasing interest. One motivation is the extremely high efficiency of over 90% achievable with this technique. The signals that drive Class-D amplifiers are binary. More and more audio signals are either stored digitally (CD, DVD, MP3) or transmitted digitally (Internet, DRM, DAB, DVB-T, DVB-S, GSM, UMTS), so a direct conversion of these data into a binary control signal, without a prior conventional D/A conversion, appears desirable.

    Classical pulse-width modulation schemes produce aliasing components in the audio baseband. These distortions can only be reduced to an acceptable level by a very high switching frequency. The SB-ZePoC method (Zero Position Coding with Separated Baseband), introduced by the research group of Prof. Mathis, prevents this kind of signal distortion by generating a separated baseband. Therefore, low switching frequencies can also be chosen, which reduces not only the switching losses but also the timing distortions caused by the non-ideal switching output stage; these contribute a large share of the total distortion of a Class-D amplifier. With the SB-ZePoC method, low-distortion open-loop Class-D audio amplifiers can be realized that do not require elaborate feedback loops.

    Class-D amplifiers are suitable for amplification of audio signals. One argument is their high efficiency of 90% and more. Today most audio signals are stored or transmitted in digital form. A digitally controlled Class-D amplifier can be directly driven with coded (modulated) data. No separate D/A conversion is needed. Classical modulation schemes like Pulse-Width Modulation (PWM) cause aliasing. So a very high switching rate is required to minimize the

  14. audio-ultrasonic waves by argon gas discharge

    International Nuclear Information System (INIS)

    Ragheb, M.S.

    2010-01-01

    In the present work, wave emission formed by an audio-ultrasonic plasma is investigated. Evidence of the presence of the magnetic and electric fields is obtained by experimental techniques. A comparison between the experimental field measurements and several plasma wave methods reveals the plasma's audio-ultrasonic radiation mode. This plasma is a symmetrically driven capacitive discharge consisting of three interactive regions: the electrodes, the sheaths, and the positive column. The discharge voltage is up to 900 volts, and the discharge current flowing through the plasma attains a value of 360 mA. The frequency of the discharge voltage covers the audio and ultrasonic range up to 100 kHz. The effective plasma working distance has been increased to the total length of the tube, 40 cm. A non-disturbing method using an external coil is used to measure the electric discharge field in a plane perpendicular to the plasma tube axis. This method proves the existence of a current flowing in a direction perpendicular to the plasma tube axis. A system of minute coil sensors proved the existence of two fields in two perpendicular directions. A comparison between the different observed fields reveals the existence of propagating electromagnetic waves due to the alternating current flowing through the skin of the plasma tube. The field intensity distribution along the tube traces the discharge current behavior between the two plasma electrodes, which can be used to predict the range of the plasma discharge current.

  15. Automatic Detection and Classification of Audio Events for Road Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Noor Almaadeed

    2018-06-01

    Full Text Available This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.

  16. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploiting the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying those frequency components. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients in different frequency bands, and the fourth-level detail coefficients are then divided into a former packet and a latter packet, to each of which DCT is applied to get two sets of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. The watermark extraction is blind, requiring no access to the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
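
    A schematic version of the embedding rule described above (DWT to the fourth-level detail band, split into two packets, DCT of each, then biasing the two average amplitudes) might look like the following; the wavelet, the frame handling and the bias constant delta are illustrative assumptions rather than the paper's exact rule.

```python
import numpy as np
import pywt
from scipy.fftpack import dct, idct

def embed_bit(frame, bit, wavelet="db4", level=4, delta=0.05):
    """Embed one watermark bit in one audio frame by biasing the average
    DCT amplitudes of the two halves of the level-4 detail band."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    d4 = coeffs[1]                       # deepest-level detail coefficients
    half = len(d4) // 2
    A = dct(d4[:half], norm="ortho")
    B = dct(d4[half:], norm="ortho")
    bias = delta if bit else -delta      # push mean|A| up for 1, down for 0
    A *= 1.0 + bias
    B *= 1.0 - bias
    d4[:half] = idct(A, norm="ortho")
    d4[half:] = idct(B, norm="ortho")
    coeffs[1] = d4
    return pywt.waverec(coeffs, wavelet)

def extract_bit(frame, wavelet="db4", level=4):
    """Blind extraction: compare the two average DCT amplitudes."""
    d4 = pywt.wavedec(frame, wavelet, level=level)[1]
    half = len(d4) // 2
    A = dct(d4[:half], norm="ortho")
    B = dct(d4[half:], norm="ortho")
    return int(np.mean(np.abs(A)) > np.mean(np.abs(B)))
```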

  17. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    Science.gov (United States)

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.

  18. Study of audio speakers containing ferrofluid

    Energy Technology Data Exchange (ETDEWEB)

    Rosensweig, R E [34 Gloucester Road, Summit, NJ 07901 (United States); Hirota, Y; Tsuda, S [Ferrotec, 1-4-14 Kyobashi, chuo-Ku, Tokyo 104-0031 (Japan); Raj, K [Ferrotec, 33 Constitution Drive, Bedford, NH 03110 (United States)

    2008-05-21

    This work validates a method for increasing the radial restoring force on the voice coil in audio speakers containing ferrofluid. In addition, a study is made of factors influencing splash loss of the ferrofluid due to shock. Ferrohydrodynamic analysis is employed throughout to model behavior, and predictions are compared to experimental data.

  19. Audio engineering 101 a beginner's guide to music production

    CERN Document Server

    Dittmar, Tim

    2013-01-01

    Audio Engineering 101 is a real-world guide for starting out in the recording industry. If you have the dream, the ideas, the music and the creativity but don't know where to start, then this book is for you! Filled with practical advice on how to navigate the recording world, from an author with first-hand, real-life experience, Audio Engineering 101 will help you succeed in the exciting, but tough and confusing, music industry. Covering all you need to know about the recording process, from the characteristics of sound to a guide to microphones to analog versus digital

  20. GaN Power Stage for Switch-mode Audio Amplification

    DEFF Research Database (Denmark)

    Ploug, Rasmus Overgaard; Knott, Arnold; Poulsen, Søren Bang

    2015-01-01

    Gallium Nitride (GaN) based power transistors are gaining more and more attention since the introduction of the enhancement-mode eGaN Field Effect Transistor (FET), which makes adaptation from metal-oxide-semiconductor field-effect transistor (MOSFET) to eGaN based technology less complex than using depletion-mode GaN FETs. This project seeks to investigate the possibilities of using eGaN FETs as the power switching device in a full bridge power stage intended for switch mode audio amplification. A 50 W 1 MHz power stage was built and provided promising audio performance. Future work includes optimization of dead...

  1. Focus on Hinduism: Audio-Visual Resources for Teaching Religion. Occasional Publication No. 23.

    Science.gov (United States)

    Dell, David; And Others

    The guide presents annotated lists of audio and visual materials about the Hindu religion. The authors point out that Hinduism cannot be comprehended totally by reading books; thus the resources identified in this guide will enhance understanding based on reading. The guide is intended for use by high school and college students, teachers,…

  2. Audio- and TV-products. Power consumption reduction in audio- and TV-products. Final report; Audio- og TV-produkter. Effektminimering i audio- og TV-produkter: Afsluttende rapport

    Energy Technology Data Exchange (ETDEWEB)

    Kierkegaard, P.

    1998-10-01

    The project concerning the audio products resulted in energy savings of 90-97% at efficiencies of 91-96% at full power and stand-by losses of 0.4-3 W. The breakthrough is due especially to new epoch-making methods for pulse modulation (called Controlled Oscillation Modulator, COM, and Phase Shifted Carrier Pulse Width Modulation, PSCPWM) and for error correction in the power conversion (called Multivariable Enhanced Cascade Control, MECC, and Pulse Edge Delay Error Correction, PEDEC). Two patents have been applied for, and new digital amplifiers will be introduced in all the relevant products. The project concerning TV products has shown that a loss reduction of approximately 20% may be obtained in deflection circuits. (EHS)

  3. Small signal audio design

    CERN Document Server

    Self, Douglas

    2014-01-01

    Learn to use inexpensive and readily available parts to obtain state-of-the-art performance in all the vital parameters of noise, distortion, crosstalk and so on. With ample coverage of preamplifiers and mixers and a new chapter on headphone amplifiers, this practical handbook provides an extensive repertoire of circuits that can be put together to make almost any type of audio system. A resource packed full of valuable information, with virtually every page revealing nuggets of specialized knowledge not found elsewhere. Essential points of theory that bear on practical performance are lucidly

  4. Audio watermarking robust against D/A and A/D conversions

    Directory of Open Access Journals (Sweden)

    Xiang Shijun

    2011-01-01

    Full Text Available Digital audio watermarking robust against digital-to-analog (D/A) and analog-to-digital (A/D) conversions is an important issue, since D/A and A/D conversions are involved in a number of watermark application scenarios. In this article, we first investigate the degradation due to DA/AD conversions via sound cards, which can be decomposed into volume change, additional noise, and time-scale modification (TSM). Then, we propose a solution for DA/AD conversions that considers the effects of the volume change, additional noise and TSM. For the volume change, we introduce a relation-based watermarking method that modifies the energy relations of groups of three adjacent DWT coefficient sections. For the additional noise, we pick the lowest-frequency coefficients for watermarking. For the TSM, a synchronization technique (with synchronization codes and an interpolation processing operation) is exploited. Simulation tests show that the proposed audio watermarking algorithm provides satisfactory performance against DA/AD conversions and common audio processing manipulations.

  5. Comparative evaluation of the effectiveness of audio and audiovisual distraction aids in the management of anxious pediatric dental patients

    Directory of Open Access Journals (Sweden)

    Rajwinder Kaur

    2015-01-01

    Full Text Available Objective: The aim of this study was to evaluate and compare audio and audiovisual distraction aids in the management of anxious pediatric dental patients of different age groups and to study children's response to sequential dental visits with the use of distraction aids. Study Design: This study was conducted on two age groups, 4-6 years and 6-8 years, with 30 patients in each age group on their first dental visit. The children of both age groups were divided into three subgroups, the control group, the audio distraction group and the audiovisual distraction group, with 10 patients in each subgroup. Each child in all the subgroups went through three dental visits. Child anxiety level at each visit was assessed using a combination of anxiety-measuring parameters. The data collected were tabulated and subjected to statistical analysis. Results: A Tukey honestly significant difference post-hoc test at the 0.05 level of significance revealed that the audiovisual group showed a statistically highly significant difference from the audio and control groups, whereas the audio group showed a statistically significant difference from the control group. Conclusion: Audiovisual distraction was found to be a more effective mode of distraction in the management of anxious children in both age groups when compared to audio distraction. In both age groups, a significant effect of the visit type was also observed.

  6. Comparative evaluation of the effectiveness of audio and audiovisual distraction aids in the management of anxious pediatric dental patients.

    Science.gov (United States)

    Kaur, Rajwinder; Jindal, Ritu; Dua, Rohini; Mahajan, Sandeep; Sethi, Kunal; Garg, Sunny

    2015-01-01

    The aim of this study was to evaluate and compare audio and audiovisual distraction aids in the management of anxious pediatric dental patients of different age groups and to study children's response to sequential dental visits with the use of distraction aids. This study was conducted on two age groups, 4-6 years and 6-8 years, with 30 patients in each age group on their first dental visit. The children of both age groups were divided into three subgroups, the control group, the audio distraction group and the audiovisual distraction group, with 10 patients in each subgroup. Each child in all the subgroups went through three dental visits. Child anxiety level at each visit was assessed using a combination of anxiety-measuring parameters. The data collected were tabulated and subjected to statistical analysis. A Tukey honestly significant difference post-hoc test at the 0.05 level of significance revealed that the audiovisual group showed a statistically highly significant difference from the audio and control groups, whereas the audio group showed a statistically significant difference from the control group. Audiovisual distraction was found to be a more effective mode of distraction in the management of anxious children in both age groups when compared to audio distraction. In both age groups, a significant effect of the visit type was also observed.

  7. Computerized Audio-Visual Instructional Sequences (CAVIS): A Versatile System for Listening Comprehension in Foreign Language Teaching.

    Science.gov (United States)

    Aleman-Centeno, Josefina R.

    1983-01-01

    Discusses the development and evaluation of CAVIS, which consists of an Apple microcomputer used with audiovisual dialogs. Includes research on the effects of three conditions: (1) computer with audio and visual, (2) computer with audio alone and (3) audio alone in short-term and long-term recall. (EKN)

  8. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Full Text Available Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to the audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger

  9. The relationship between target quality and interference in sound zones

    DEFF Research Database (Denmark)

    Baykaner, Khan; Coleman, Phillip; Mason, Russell

    2015-01-01

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced...... audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity...

  10. Approaches to building single-stage AC/AC conversion switch-mode audio power amplifiers

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2004-01-01

    This paper discusses the possible topologies and promising approaches towards direct single-phase AC-AC conversion of the mains voltage for audio applications. When compared to standard Class-D switching audio power amplifiers with a separate power supply, it is expected that direct conversion...

  11. Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices

    Science.gov (United States)

    Aroca, Rafael V.; Burlamaqui, Aquiles F.; Gonçalves, Luiz M. G.

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks. PMID:22438726

  12. Method for reading sensors and controlling actuators using audio interfaces of mobile devices.

    Science.gov (United States)

    Aroca, Rafael V; Burlamaqui, Aquiles F; Gonçalves, Luiz M G

    2012-01-01

    This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks.
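
    A rough sketch of the basic idea (audio tones as a control and sensing channel) is shown below. The command-to-frequency mapping, tone duration and framing are illustrative assumptions, not the parameters of the cited system.

      import numpy as np

      FS = 44100
      TONES = {"led_on": 2000.0, "led_off": 2500.0, "read_sensor": 3000.0}  # assumed command map

      def encode_command(name, duration=0.1):
          """Generate an audio tone representing a command."""
          t = np.arange(int(FS * duration)) / FS
          return 0.8 * np.sin(2 * np.pi * TONES[name] * t)

      def decode_command(frame):
          """Identify the dominant tone in a recorded frame via an FFT peak."""
          spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
          peak_hz = np.fft.rfftfreq(len(frame), d=1.0 / FS)[np.argmax(spec)]
          return min(TONES, key=lambda k: abs(TONES[k] - peak_hz))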

  13. Transcript of Audio Narrative Portion of: Scandinavian Heritage. A Set of Five Audio-Visual Film Strip/Cassette Presentations.

    Science.gov (United States)

    Anderson, Gerald D.; Olson, David B.

    The document presents the transcript of the audio narrative portion of approximately 100 interviews with first and second generation Scandinavian immigrants to the United States. The document is intended for use by secondary school classroom teachers as they develop and implement educational programs related to the Scandinavian heritage in…

  14. The Transfer of Learning Associated with Audio Feedback on Written Work

    Directory of Open Access Journals (Sweden)

    Tanya Martini

    2014-11-01

    Full Text Available This study examined whether audio feedback provided to undergraduates (N=51) about one paper would prove beneficial in terms of improving their grades on another, unrelated paper of the same type. We examined this issue both in terms of student beliefs about learning transfer and their actual ability to transfer what had been learned on one assignment to a subsequent assignment. Results indicated that students believed they would be able to transfer what they had learned via audio feedback. Moreover, results also suggested that students actually did generalize the overarching comments about content and structure made in the audio files to a subsequent paper, the content of which differed substantially from the initial one. Both students and teaching assistants responded very favourably to this type of feedback, suggesting that it was both clear and comprehensive.

  15. Guided Expectations: A Case Study of a Sound Collage Audio Guide

    DEFF Research Database (Denmark)

    Laursen, Ditte

    This paper is a user evaluation of a mobile phone audio guide developed for visitors to use at the National Gallery of Denmark. The audio guide is offered as a downloadable MP3 file to every incoming visitor who is carrying a mobile phone with an open Bluetooth connection. The guide itself...... according to personal interest, and a conflict between the expectation of a learning experience rather than an aesthetic experience. Results indicate that most visitors are able to make sense of the guide and to use it successfully, in different ways, to enrich their visit. Evaluation also shows...... that visitors are fond of using their own mobile phones - but they have several problems with their phones in downloading the MP3 file. Read more: Guided Expectations: A Case Study of a Sound Collage Audio Guide | conference.archimuse.com...

  16. Cambridge English First 2 audio CDs : authentic examination papers

    CERN Document Server

    2016-01-01

    Four authentic Cambridge English Language Assessment examination papers for the Cambridge English: First (FCE) exam. These examination papers for the Cambridge English: First (FCE) exam provide the most authentic exam preparation available, allowing candidates to familiarise themselves with the content and format of the exam and to practise useful exam techniques. The Audio CDs contain the recorded material to allow thorough preparation for the Listening paper and are designed to be used with the Student's Book. A Student's Book with or without answers and a Student's Book with answers and downloadable Audio are available separately. These tests are also available as Cambridge English: First Tests 5-8 on Testbank.org.uk

  17. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  18. An Analog I/O Interface Board for Audio Arduino Open Sound Card System

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    AudioArduino [1] is a system consisting of an ALSA (Advanced Linux Sound Architecture) audio driver and corresponding microcontroller code; that can demonstrate full-duplex, mono, 8-bit, 44.1 kHz soundcard behavior on an FTDI based Arduino. While the basic operation as a soundcard can...

  19. Minimizing Crosstalk in Self Oscillating Switch Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Ploug, Rasmus Overgaard

    2012-01-01

    The varying switching frequencies of self-oscillating switch-mode audio amplifiers have been known to cause interchannel intermodulation disturbances in multi-channel configurations. This crosstalk phenomenon has a negative impact on the audio performance. The goal of this paper is to present a method to minimize this phenomenon by improving the integrity of the various power distribution systems of the amplifier. The method is then applied to an amplifier built for this investigation. The results show that the crosstalk is suppressed by 30 dB, but is not entirely eliminated...

  20. Convolution-based classification of audio and symbolic representations of music

    DEFF Research Database (Denmark)

    Velarde, Gissel; Cancino Chacón, Carlos; Meredith, David

    2018-01-01

    We present a novel convolution-based method for classification of audio and symbolic representations of music, which we apply to classification of music by style. Pieces of music are first sampled to pitch–time representations (piano-rolls or spectrograms) and then convolved with a Gaussian filter......-class composer identification, methods specialised for classifying symbolic representations of music are more effective. We also performed experiments on symbolic representations, synthetic audio and two different recordings of The Well-Tempered Clavier by J. S. Bach to study the method’s capacity to distinguish...

  1. Derivation and Analysis of a Low-Cost, High-performance Analogue BPCM Control Scheme for Class-D Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael A. E.

    2005-01-01

    This paper presents a low-cost analogue control scheme for class-D audio power amplifiers. The scheme is based around bandpass current-mode (BPCM) control, and provides ample stability margins and low distortion over a wide range of operating conditions. Implementation is very simple and does...

  2. The audio and visual communication systems for suited engineering activities on JET

    International Nuclear Information System (INIS)

    Pearce, R.J.H.; Bruce, J.; Callaghan, C.; Hart, M.; Martin, P.; Middleton, R.; Tait, J.

    2001-01-01

    The beryllium and/or tritium contamination of the JET tokamak and auxiliary systems necessitates that many activities are carried out in air-line-fed pressurised suits. To enable often complex engineering activities to be performed, a number of novel audio and visual communication systems have been designed. The paper describes these systems, which give freedom of visual and audio communication between suited personnel, supervisors, operators and engineers. The system enhances the safety of the working environment as well as helping to minimise the radiation dose to personnel. It is concluded, from a number of years of experience of using the audio and visual communication systems for suited operations, that safety and the progress of complex engineering tasks have been significantly enhanced.

  3. The audio and visual communication systems for suited engineering activities on JET

    Energy Technology Data Exchange (ETDEWEB)

    Pearce, R.J.H. E-mail: robert.pearce@jet.uk; Bruce, J.; Callaghan, C.; Hart, M.; Martin, P.; Middleton, R.; Tait, J

    2001-11-01

    The beryllium and/or tritium contamination of the JET tokamak and auxiliary systems necessitates that many activities are carried out in air-line-fed pressurised suits. To enable often complex engineering activities to be performed, a number of novel audio and visual communication systems have been designed. The paper describes these systems, which give freedom of visual and audio communication between suited personnel, supervisors, operators and engineers. The system enhances the safety of the working environment as well as helping to minimise the radiation dose to personnel. It is concluded, from a number of years of experience of using the audio and visual communication systems for suited operations, that safety and the progress of complex engineering tasks have been significantly enhanced.

  4. Procedural Audio in Computer Games Using Motion Controllers: An Evaluation on the Effect and Perception

    Directory of Open Access Journals (Sweden)

    Niels Böttcher

    2013-01-01

    Full Text Available A study has been conducted into whether the use of procedural audio affects players in computer games using motion controllers. It was investigated whether or not (1) players perceive a difference between detailed, interactive procedural audio and prerecorded audio, (2) the use of procedural audio affects their motor behavior, and (3) procedural audio affects their perception of control. Three experimental surveys were devised, two consisting of game sessions and the third consisting of watching videos of gameplay. A skiing game controlled by a Nintendo Wii balance board and a sword-fighting game controlled by a Wii remote were implemented with two versions of sound, one sample based and the other procedural based. The procedural models were designed using a perceptual approach and by alternative combinations of well-known synthesis techniques. The experimental results showed that, when actively playing or purely observing a video recording of a game, the majority of participants did not notice any difference in sound. Additionally, it was not possible to show that the use of procedural audio caused any consistent change in motor behavior. In the skiing experiment, a portion of players perceived the control of the procedural version as being more sensitive.

  5. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    Science.gov (United States)

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means of obtaining high-precision alignment of behavioral and neurophysiological data is achieved by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper offers a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this setup makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
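
    A rough sketch of such a two-domain synchronization pulse is given below: a slow on/off pattern encodes a binary frame counter while a randomly drawn high frequency adds the entropy needed for offline alignment. The sampling rate, bit duration, bit count and frequency range are assumed values, not those of the published method.

      import numpy as np

      FS = 44100

      def sync_pulse(counter, n_bits=8, bit_dur=0.01):
          rng = np.random.default_rng()
          bits = [(counter >> b) & 1 for b in range(n_bits)]   # low-frequency binary count
          hi_freq = rng.uniform(8000.0, 15000.0)               # randomly changing high frequency
          samples = []
          for bit in bits:
              t = np.arange(int(FS * bit_dur)) / FS
              tone = np.sin(2 * np.pi * hi_freq * t)
              samples.append(tone if bit else np.zeros_like(tone))
          return np.concatenate(samples), hi_freq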

  6. A first demonstration of audio-frequency optical coherence elastography of tissue

    Science.gov (United States)

    Adie, Steven G.; Alexandrov, Sergey A.; Armstrong, Julian J.; Kennedy, Brendan F.; Sampson, David D.

    2008-12-01

    Optical elastography is aimed at using the visco-elastic properties of soft tissue as a contrast mechanism, and could be particularly suitable for high-resolution differentiation of tumour from surrounding normal tissue. We present a new approach to measure the effect of an applied stimulus in the kilohertz frequency range that is based on optical coherence tomography. We describe the approach and present the first in vivo optical coherence elastography measurements in human skin at audio excitation frequencies.

  7. Agency Video, Audio and Imagery Library

    Science.gov (United States)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  8. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used in biometric and multimedia information retrieval systems. This technology results from continued research on audio feature extraction analysis. The probability distribution function (PDF) is a statistical method usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
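
    A minimal sketch of the idea follows: frame the signal and use a normalized amplitude histogram as an estimate of each frame's probability distribution function. The frame length, hop size and bin count are assumed values.

      import numpy as np

      def pdf_features(x, frame_len=400, hop=200, n_bins=32):
          frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
          feats = []
          for frame in frames:
              hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
              feats.append(hist)
          return np.array(feats)    # one PDF estimate per frame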

  9. Mixed-Signal Architectures for High-Efficiency and Low-Distortion Digital Audio Processing and Power Amplification

    Directory of Open Access Journals (Sweden)

    Pierangelo Terreni

    2010-01-01

    Full Text Available The paper addresses the algorithmic and architectural design of digital-input power audio amplifiers. A modelling platform, based on a meet-in-the-middle approach between top-down and bottom-up design strategies, allows a fast but still accurate exploration of the mixed-signal design space. Different amplifier architectures are configured and compared to find optimal trade-offs among different cost functions: low distortion, high efficiency, low circuit complexity and low sensitivity to parameter changes. A novel amplifier architecture is derived; its prototype implements digital processing IP macrocells (oversampler, interpolating filter, PWM cross-point deriver, noise shaper, multilevel PWM modulator, dead time compensator) on a single low-complexity FPGA, while off-chip components are used only for the power output stage (LC filter and power MOS bridge); no heatsink is required. The resulting digital-input amplifier features a power efficiency higher than 90% and a total harmonic distortion down to 0.13% at power levels of tens of Watts. Discussions towards the full-silicon integration of the mixed-signal amplifier in embedded devices, using BCD technology and targeting power levels of a few Watts, are also reported.

  10. Development of Audio-Video Experiment-Based Physics Learning Multimedia on Dynamic Electricity in Junior High School

    Directory of Open Access Journals (Sweden)

    P. Rante

    2013-10-01

    Full Text Available This development research was carried out to profile the development of physics learning multimedia based on audio-video experiments on the topic of dynamic electricity, which can serve as a solution where laboratory practicals cannot be carried out at school. The results show that the developed audio-video experiment multimedia has an attractive appearance and coherent, systematic and practical facilities, and offers a solution where practicals cannot be conducted. The final product is an autorun CD package of interactive learning multimedia that serves as a self-study medium and as a presentation medium, equipped with teaching and learning materials for the teacher.

  11. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    John Mourjopoulos

    2007-01-01

    Full Text Available Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed “jither” and it can be applied either in the PCM amplitude or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio, for mild oversampling (e.g., ×4) resulting in typical PWM clock rates of 90 MHz.

  12. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    Mourjopoulos John

    2007-01-01

    Full Text Available Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed "jither" and it can be applied either in the PCM amplitude or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio, for mild oversampling (e.g., ×4) resulting in typical PWM clock rates of 90 MHz.

  13. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  14. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would typically be 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they typically are in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures", providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  15. Automatic summarization of soccer highlights using audio-visual descriptors.

    Science.gov (United States)

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summary generation from sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.

  16. Active Electromagnetic Interference Cancelation for Automotive Switch-Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Pfaffinger, Gerhard; Andersen, Michael A. E.

    2009-01-01

    Recent trends in the automotive audio industry have shown the importance of active noise cancelation (ANC) for major improvements in mobile entertainment environments. These approaches target the acoustical noise in the cabin and superimpose an inverse noise signal to cancel disturbances. Electro...

  17. Design of a WAV audio player based on K20

    Directory of Open Access Journals (Sweden)

    Xu Yu

    2016-01-01

    Full Text Available The designed player uses the Freescale MK20DX128VLH7 as the core control chip, and its hardware platform is equipped with a VS1003 audio decoder, an OLED display interface, a USB interface and an SD card slot. The player uses the open-source embedded real-time operating system μC/OS-II, Freescale USB Stack V4.1.1 and FATFS, and a graphical user interface based on CGUI is developed to improve the user experience. In general, the designed WAV audio player has strong applicability and good practical value.

  18. Digitisation of the CERN Audio Archives

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    From the creation of CERN in 1954 until the mid-1980s, the audiovisual service recorded hundreds of hours of moments of life at CERN on audio tapes. These moments range from inaugurations of new facilities to VIP speeches and general-interest cultural seminars. The preservation process started in June 2005. In these pictures, we see Waltraud Hug working on an open-reel tape.

  19. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    Science.gov (United States)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm in order to provide audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. A real-time audio-zoom system is then implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization, including algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds long, sampled at 48 kHz, was used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB compared to the other directions.
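
    A simplified two-microphone sketch of the audio-zoom idea is shown below: a soft mask built from inter-channel phase differences favors frontal content, and a gain derived from the video-zoom level scales it. The zoom-to-gain mapping, mask shaping and STFT parameters are assumptions for illustration, not the paper's 4-channel super-directive design.

      import numpy as np
      from scipy.signal import stft, istft

      def audio_zoom(mic1, mic2, zoom_level, fs=48000, max_gain_db=10.0):
          f, t, X1 = stft(mic1, fs=fs, nperseg=1024)
          _, _, X2 = stft(mic2, fs=fs, nperseg=1024)
          phase_diff = np.angle(X1 * np.conj(X2))          # near zero for frontal sources
          mask = np.clip(1.0 - np.abs(phase_diff) / np.pi, 0.0, 1.0) ** 2
          gain = 10 ** (max_gain_db * zoom_level / 20.0)   # zoom_level in [0, 1]
          _, y = istft(gain * mask * X1, fs=fs, nperseg=1024)
          return y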

  20. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    be considered to be a system that encompasses design decisions on both hardware and software levels - which also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...

  1. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Abdeldjalil Aïssa-El-Bey

    2007-03-01

    Full Text Available This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source using vector clustering). For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.

  2. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Aïssa-El-Bey Abdeldjalil

    2007-01-01

    Full Text Available This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source using vector clustering). For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using the ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.

  3. Estimation of the energy ratio between primary and ambience components in stereo audio data

    NARCIS (Netherlands)

    Harma, A.S.

    2011-01-01

    A stereo audio signal is often modeled as a mixture of instantaneously mixed primary components and uncorrelated ambience components. This paper focuses on the estimation of the primary-to-ambience energy ratio, PAR. This measure is useful for signal decomposition in stereo and multichannel audio
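
    One common simplified estimator is sketched below for illustration only: assuming the primary component has equal power in both channels and the ambience components are uncorrelated with equal power, the normalized inter-channel correlation phi relates to the ratio as PAR = phi / (1 - phi). This is a textbook-style estimate under those assumptions, not the estimator proposed in the cited record.

      import numpy as np

      def estimate_par(left, right, eps=1e-12):
          cross = np.abs(np.mean(left * right))                    # ~ primary power
          power = 0.5 * (np.mean(left ** 2) + np.mean(right ** 2)) # ~ primary + ambience power
          phi = np.clip(cross / (power + eps), 0.0, 1.0 - eps)
          return phi / (1.0 - phi)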

  4. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  5. Formal usability evaluation of audio track widget graphical representation for two-dimensional stage audio mixing interface

    OpenAIRE

    Dewey, Christopher; Wakefield, Jonathan P.

    2017-01-01

    The two-dimensional stage paradigm (2DSP) has been suggested as an alternative audio mixing interface (AMI). This study seeks to refine the 2DSP by formally evaluating graphical track visualisation styles. Track visualisations considered were text only, circles containing text, individually coloured circles containing text, circles colour coded by instrument type with text, icons with text superimposed, circles with RMS related dynamic opacity and a traditional AMI. The usability evaluation f...

  6. Nonspeech audio in user interfaces for TV

    NARCIS (Netherlands)

    Sluis, van de Richard; Eggen, J.H.; Rypkema, J.A.

    1997-01-01

    This study explores the end-user benefits of using nonspeech audio in television user interfaces. A prototype of an Electronic Programme Guide (EPG) served as a carrier for the research. One of the features of this EPG is the possibility to search for TV programmes in a category-based way. The EPG

  7. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  8. Superwideband Bandwidth Extension Using Normalized MDCT Coefficients for Scalable Speech and Audio Coding

    Directory of Open Access Journals (Sweden)

    Young Han Lee

    2013-01-01

    Full Text Available A bandwidth extension (BWE) algorithm from wideband to superwideband (SWB) is proposed for a scalable speech/audio codec that uses modified discrete cosine transform (MDCT) coefficients as spectral parameters. The superwideband is first split into several subbands that are represented as gain parameters and normalized MDCT coefficients in the proposed BWE algorithm. We then estimate normalized MDCT coefficients of the wideband to be fetched for the superwideband and quantize the fetch indices. After that, we quantize gain parameters by using relative ratios between adjacent subbands. The proposed BWE algorithm is embedded into a standard superwideband codec, the SWB extension of G.729.1 Annex E, and its bitrate and quality are compared with those of the BWE algorithm already employed in the standard superwideband codec. The comparison shows that the proposed BWE algorithm reduces the bitrate by around 19% relative to the BWE algorithm in the SWB extension of G.729.1 Annex E, with better quality.

  9. Interpolation Filter Design for Hearing-Aid Audio Class-D Output Stage Application

    DEFF Research Database (Denmark)

    Pracný, Peter; Bruun, Erik; Llimos Muntal, Pere

    2012-01-01

    This paper deals with the design of a digital interpolation filter for a 3rd order multi-bit ΣΔ modulator with over-sampling ratio OSR = 64. The interpolation filter and the ΣΔ modulator are part of the back-end of an audio signal processing system in a hearing-aid application. The aim in this paper...... is to compare this design to designs presented in other state-of-the-art works ranging from hi-fi audio to hearing aids. By performing this comparison, trends and tradeoffs in interpolation filter design are identified and hearing-aid specifications are derived. The possibilities for hardware reduction...... in the interpolation filter are investigated. Proposed design simplifications presented here result in the least hardware demanding combination of oversampling ratio, number of stages and number of filter taps among a number of filters reported for audio applications....

  10. Virtual environment display for a 3D audio room simulation

    Science.gov (United States)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons (TM) by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak (TM) tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  11. Real-time Loudspeaker Distance Estimation with Stereo Audio

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Gaubitch, Nikolay; Heusdens, Richard

    2015-01-01

    Knowledge on how a number of loudspeakers are positioned relative to a listening position can be used to enhance the listening experience. Usually, these loudspeaker positions are estimated using calibration signals, either audible or psycho-acoustically hidden inside the desired audio signal...

  12. Development of a Graphic and Parametric Equalizer Trainer as a Learning Medium for the Audio System Practicum Course

    Directory of Open Access Journals (Sweden)

    Bekti Wulandari

    2016-01-01

    Full Text Available The objectives of this study were to describe the development process of graphic and parametric equalizer trainer media for the Audio System practicum course and to examine the quality of the media. The development process followed the model offered by Lee and Owens. The research procedure consisted of assessment/analysis, design, implementation and evaluation. The results showed that the mean scores for the quality of the media and the companion module were 4.31 and 4.42, respectively, giving an overall score of 4.36. In the trial with students, the scores for the media and the companion module were 4.47 and 4.36, respectively. Thus the level of validity and feasibility of the media was categorized as very good.

  13. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  14. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  15. Feature Selection for Audio Surveillance in Urban Environment

    Directory of Open Access Journals (Sweden)

    KIKTOVA Eva

    2014-05-01

    Full Text Available This paper presents the work leading to an acoustic event detection system designed to recognize two types of acoustic events (shot and breaking glass) in an urban environment. For this purpose, extensive front-end processing was performed to obtain an effective parametric representation of the input sound. MFCC features and features computed during their extraction (MELSPEC and FBANK), as well as MPEG-7 audio descriptors and other temporal and spectral characteristics, were extracted. High-dimensional feature sets were created and, in the next phase, reduced by mutual-information-based selection algorithms. A Hidden Markov Model based classifier was applied and evaluated with the Viterbi decoding algorithm. In this way, very effective feature sets were identified and the less important features were found.
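
    A sketch of mutual-information-based feature ranking is shown below. Feature extraction itself (MFCC, MPEG-7 descriptors, etc.) is assumed to have been done already, so X is simply a feature matrix and y the event labels; the number of retained features is an assumed parameter.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      def select_features(X, y, keep=20):
          """Rank features by mutual information with the class labels and keep the top ones."""
          mi = mutual_info_classif(X, y, random_state=0)
          top = np.argsort(mi)[::-1][:keep]
          return X[:, top], top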

  16. Deep learning, audio adversaries, and music content analysis

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    We present the concept of adversarial audio in the context of deep neural networks (DNNs) for music content analysis. An adversary is an algorithm that makes minor perturbations to an input that cause major repercussions to the system response. In particular, we design an adversary for a DNN that takes as input short-time spectral magnitudes of recorded music and outputs a high-level music descriptor. We demonstrate how this adversary can make the DNN behave in any way with only extremely minor changes to the music recording signal. We show that the adversary cannot be neutralised by a simple filtering of the input. Finally, we discuss adversaries in the broader context of the evaluation of music content analysis systems.

  17. Toward Personal and Emotional Connectivity in Mobile Higher Education through Asynchronous Formative Audio Feedback

    Science.gov (United States)

    Rasi, Päivi; Vuojärvi, Hanna

    2018-01-01

    This study aims to develop asynchronous formative audio feedback practices for mobile learning in higher education settings. The development was conducted in keeping with the principles of design-based research. The research activities focused on an inter-university online course, within which the use of instructor audio feedback was tested,…

  18. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  19. Audio-Visual Aid in Teaching "Fatty Liver"

    Science.gov (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio-visual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  20. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, mainly because of the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.
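
    The following sketch illustrates only the random-projection front end of such a hash: a coarse magnitude spectrogram is projected onto a shared pseudo-random matrix and quantized to one bit per projection. The LDPC syndrome coding and the distributed-source-coding decoder described in the abstract are omitted, and all parameter values are placeholders.

      # Sketch of a compressive-sensing-style audio hash front end.
      import numpy as np

      def audio_hash(signal, n_proj=200, frame=1024, hop=512, seed=1234):
          n_frames = 1 + (len(signal) - frame) // hop
          window = np.hanning(frame)
          spec = np.abs(np.array([
              np.fft.rfft(window * signal[i * hop:i * hop + frame])
              for i in range(n_frames)
          ]))                                   # coarse time-frequency representation
          x = spec.flatten()
          rng = np.random.default_rng(seed)     # projection matrix shared via the seed
          phi = rng.standard_normal((n_proj, x.size)) / np.sqrt(x.size)
          projections = phi @ x
          # one bit per projection: sign relative to the median projection value
          return (projections > np.median(projections)).astype(np.uint8)

      # usage: bits = audio_hash(one_second_of_audio)   # roughly 200 bits per second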

  1. The Success of Free to Play Games and Possibilities of Audio Monetization

    OpenAIRE

    Hahl, Kalle

    2014-01-01

    Video games are a huge business – nearly four times larger than the film and music businesses combined. Free to play is the fastest growing category in video gaming. Game audio is part of the development of every game, so the growth of the gaming industry correlates directly with the growth of the game audio industry. Games have inherently different goals for the players and the developers. Players are consumers seeking entertainment. Developers are content producers trying to moneti...

  2. Collusion-resistant audio fingerprinting system in the modulated complex lapped transform domain.

    Directory of Open Access Journals (Sweden)

    Jose Juan Garcia-Hernandez

    Full Text Available The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem, as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite billions of dollars in losses in the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals, and the corresponding parameters and operation conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirated audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer hardware, it is shown that the proposed system is suitable for real-world scenarios.

  3. A Perceptually Reweighted Mixed-Norm Method for Sparse Approximation of Audio Signals

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll; Sturm, Bob L.

    2011-01-01

    In this paper, we consider the problem of finding sparse representations of audio signals for coding purposes. In doing so, it is of utmost importance that when only a subset of the present components of an audio signal are extracted, it is the perceptually most important ones. To this end, we propose a new iterative algorithm based on two principles: 1) a reweighted l1-norm based measure of sparsity; and 2) a reweighted l2-norm based measure of perceptual distortion. Using these measures, the considered problem is posed as a constrained convex optimization problem that can be solved optimally using standard software. A prominent feature of the new method is that it solves a problem that is closely related to the objective of coding, namely rate-distortion optimization. In computer simulations, we demonstrate the properties of the algorithm and its application to real audio signals.
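
    A generic sketch of the reweighted-l1 idea is given below: an ISTA inner loop solves the l1-regularized approximation, and the weights are then updated to emphasize small coefficients. The perceptual reweighting of the l2 distortion term used in the paper is omitted, and the dictionary D is a placeholder (in an audio coder it might be a Gabor or MDCT dictionary).

      # Iteratively reweighted l1 sparse approximation (generic sketch).
      import numpy as np

      def reweighted_l1_approx(x, D, lam=0.1, n_reweight=4, n_ista=200, eps=1e-3):
          n_atoms = D.shape[1]
          w = np.ones(n_atoms)                  # l1 reweighting factors
          alpha = np.zeros(n_atoms)             # sparse coefficients
          L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
          for _ in range(n_reweight):
              for _ in range(n_ista):
                  grad = D.T @ (D @ alpha - x)
                  z = alpha - grad / L
                  thresh = lam * w / L
                  alpha = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
              w = 1.0 / (np.abs(alpha) + eps)   # emphasize small coefficients next round
          return alpha

      # usage: alpha = reweighted_l1_approx(frame, dictionary)   # frame ≈ dictionary @ alpha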

  4. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, that of cricket is very challenging and as yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on the audio energy and Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) with color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sport. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
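
    The first-level audio cues mentioned above can be computed as in the short sketch below; the event thresholds are placeholders rather than values from the paper.

      # Short-time energy and zero crossing rate per frame.
      import numpy as np

      def short_time_energy_zcr(x, frame=1024, hop=512):
          energies, zcrs = [], []
          for start in range(0, len(x) - frame + 1, hop):
              seg = x[start:start + frame]
              energies.append(float(np.sum(seg ** 2)))
              zcrs.append(float(np.mean(np.abs(np.diff(np.sign(seg))) > 0)))
          return np.array(energies), np.array(zcrs)

      # usage: frames whose energy and ZCR exceed chosen thresholds are passed
      # on to the video-based levels of the hierarchy.
      # energy, zcr = short_time_energy_zcr(audio)
      # candidate_events = (energy > e_thresh) & (zcr > z_thresh)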

  5. Audio-tactile integration and the influence of musical training.

    Science.gov (United States)

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  6. Audio-tactile integration and the influence of musical training.

    Directory of Open Access Journals (Sweden)

    Anja Kuchenbuch

    Full Text Available Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  7. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  8. Audio Networking in the Music Industry

    OpenAIRE

    Glebs Kuzmics; Maaruf Ali

    2018-01-01

    This paper surveys the rôle of computer networking technologies in the music industry. It presents a comparison of the relevant technologies with their defining advantages and disadvantages, an analysis and discussion of the market for network-enabled audio products, and a discussion of different devices. The idea of replacing a proprietary solution with open-source and freeware software programs has been chosen as the fundamental concept of this research. The technologies...

  9. New musical organology : the audio-games

    OpenAIRE

    Zénouda , Hervé

    2012-01-01

    This article aims to shed light on a new and emerging creative field: "Audio Games," a crossroads between video games and computer music. Today, a plethora of tiny applications, which propose entertaining audiovisual experiences with a preponderant sound dimension, are available for game consoles, computers, and mobile phones. These experiences represent a new universe where the gameplay of video games is applied to musical composition, hence creating new links betwee...

  10. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.

  11. Investigating the quality of video consultations performed using fourth generation (4G) mobile telecommunications.

    Science.gov (United States)

    Caffery, Liam J; Smith, Anthony C

    2015-09-01

    The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with the aims of determining whether 4G is a suitable telecommunications technology and of identifying whether variations in perceived audio and video quality were due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). The network performance metrics were obtained for each session and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunications technology to deliver real-time video consultations. Variations in quality scores observed during this study were not explained by the packet loss and jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service, due to known variability in the performance of 4G networks. © The Author(s) 2015.
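
    The quality analysis described above reduces to averaging per-session opinion scores into a MOS and testing their correlation with network metrics; a minimal sketch with illustrative placeholder numbers follows.

      # MOS and correlation with a network metric (illustrative values only).
      import numpy as np
      from scipy.stats import pearsonr

      audio_scores = np.array([4.0, 4.5, 3.5, 4.5, 4.0])   # clinician ratings, 1-5 scale
      packet_loss = np.array([0.1, 0.0, 0.4, 0.1, 0.2])    # percent loss per session

      mos = audio_scores.mean()
      r, p_value = pearsonr(audio_scores, packet_loss)
      print(f"MOS = {mos:.2f}, correlation with packet loss r = {r:.2f} (p = {p_value:.2f})")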

  12. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in the past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Secondly, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  13. Audio teleconferencing: creative use of a forgotten innovation.

    Science.gov (United States)

    Mather, Carey; Marlow, Annette

    2012-06-01

    As part of a regional School of Nursing and Midwifery's commitment to addressing recruitment and retention issues, approximately 90% of second-year undergraduate student nurses undertake clinical placements at multipurpose centres, regional or district hospitals, aged care facilities, or community centres based in rural and remote regions within the State. The remaining 10% undertake professional experience placement in urban areas only. The placement of this large cohort of students, in small numbers across a variety of clinical settings, created the need to provide consistent support to both students and staff at these facilities. Subsequently, an audio teleconferencing model of clinical facilitation was developed to guide student teaching and learning and to provide support to registered nurse preceptors in clinical practice. This paper draws on Weimer's 'Personal Accounts of Change' approach to describe, discuss and evaluate the modifications that have occurred since the inception of this audio teleconferencing model (Weimer, 2006).

  14. Predicting the perceived sound quality of frequency-compressed speech.

    Directory of Open Access Journals (Sweden)

    Rainer Huber

    Full Text Available The performance of objective speech and audio quality measures for the prediction of the perceived quality of frequency-compressed speech in hearing aids is investigated in this paper. A number of existing quality measures have been applied to speech signals processed by a hearing aid, which compresses speech spectra along frequency in order to make information contained in higher frequencies audible for listeners with severe high-frequency hearing loss. Quality measures were compared with subjective ratings obtained from normal hearing and hearing impaired children and adults in an earlier study. High correlations were achieved with quality measures computed by quality models that are based on the auditory model of Dau et al., namely, the measure PSM, computed by the quality model PEMO-Q; the measure qc, computed by the quality model proposed by Hansen and Kollmeier; and the linear subcomponent of the HASQI. For the prediction of quality ratings by hearing impaired listeners, extensions of some models incorporating hearing loss were implemented and shown to achieve improved prediction accuracy. Results indicate that these objective quality measures can potentially serve as tools for assisting in initial setting of frequency compression parameters.

  15. The brief fatigue inventory: comparison of data collection using a novel audio device with conventional paper questionnaire.

    Science.gov (United States)

    Pallett, Edward; Rentowl, Patricia; Hanning, Christopher

    2009-09-01

    An Electronic Portable Information Collection audio device (EPIC-Vox) has been developed to deliver questionnaires in spoken word format via headphones. Patients respond by pressing buttons on the device. The aims of this study were to determine limits of agreement between, and test-retest reliability of audio (A) and paper (P) versions of the Brief Fatigue Inventory (BFI). Two hundred sixty outpatients (204 male, mean age 55.7 years) attending a sleep disorders clinic were allocated to four groups using block randomization. All completed the BFI twice, separated by a one-minute distracter task. Half the patients completed paper and audio versions, then an evaluation questionnaire. The remainder completed either paper or audio versions to compare test-retest reliability. BFI global scores were analyzed using Bland-Altman methodology. Agreement between categorical fatigue severity scores was determined using Cohen's kappa. The mean (SD) difference between paper and audio scores was -0.04 (0.48). The limits of agreement (mean difference+/-2SD) were -0.93 to +1.00. Test-retest reliability of the paper BFI showed a mean (SD) difference of 0.17 (0.32) between first and second presentations (limits -0.46 to +0.81). For audio, the mean (SD) difference was 0.17 (0.48) (limits -0.79 to +1.14). For agreement between categorical scores, Cohen's kappa=0.73 for P and A, 0.67 (P at test and retest) and 0.87 (A at test and retest). Evaluation preferences (n=128): 36.7% audio; 18.0% paper; and 45.3% no preference. A total of 99.2% found EPIC-Vox "easy to use." These data demonstrate that the English audio version of the BFI provides an acceptable alternative to the paper questionnaire.
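
    The Bland-Altman agreement analysis used above amounts to the mean paper-audio difference and the limits of agreement (mean difference +/- 2 SD); a minimal sketch with placeholder scores follows.

      # Bland-Altman limits of agreement for paired BFI global scores.
      import numpy as np

      paper = np.array([3.2, 5.1, 2.4, 6.8, 4.0])   # placeholder paper scores
      audio = np.array([3.0, 5.4, 2.5, 6.5, 4.2])   # placeholder audio scores

      diff = paper - audio
      mean_diff = diff.mean()
      sd_diff = diff.std(ddof=1)
      lower, upper = mean_diff - 2 * sd_diff, mean_diff + 2 * sd_diff
      print(f"mean difference = {mean_diff:.2f}, "
            f"limits of agreement = ({lower:.2f}, {upper:.2f})")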

  16. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.
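
    A generic spread-spectrum embedding step is sketched below: a pseudo-random chip sequence, scaled well below the host level, is added to the audio and later detected by correlation. In the scheme above, the wavelet-packet psychoacoustic model would shape the sequence under the masking threshold; the fixed strength used here is only a stand-in for that step.

      # Generic spread-spectrum watermark embedding and correlation detection.
      import numpy as np

      def embed(host, key, strength=0.005):
          rng = np.random.default_rng(key)
          chips = rng.choice([-1.0, 1.0], size=host.size)
          return host + strength * chips, chips

      def detect(received, chips):
          # normalized correlation; values well above chance indicate the watermark
          return float(np.dot(received, chips) / received.size)

      # usage:
      # marked, chips = embed(audio, key=42)
      # score = detect(marked, chips)   # compare against a decision threshold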

  17. A Novel Audio Cryptosystem Using Chaotic Maps and DNA Encoding

    Directory of Open Access Journals (Sweden)

    S. J. Sheela

    2017-01-01

    Full Text Available Chaotic maps have good potential in security applications due to their inherent characteristics relevant to cryptography. This paper introduces a new audio cryptosystem based on chaotic maps, a hybrid chaotic shift transform (HCST), and deoxyribonucleic acid (DNA) encoding rules. The scheme uses chaotic maps such as the two-dimensional modified Henon map (2D-MHM) and the standard map. The 2D-MHM, which has sophisticated chaotic behavior over an extensive range of control parameters, is used to perform the HCST. DNA encoding technology is used as an auxiliary tool which enhances the security of the cryptosystem. The performance of the algorithm is evaluated for various speech signals using different encryption/decryption quality metrics. The simulation and comparison results show that the algorithm can achieve good encryption results and is able to resist several cryptographic attacks. The various types of analysis revealed that the algorithm is suitable for narrow band radio communication and real-time speech encryption applications.
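
    For illustration only, the sketch below generates a keystream from the classical Henon map (x' = 1 - a*x^2 + y, y' = b*x). The paper instead uses a modified 2D Henon map together with a hybrid chaotic shift transform and DNA encoding, none of which are reproduced here.

      # Chaotic keystream from the classical Henon map (illustrative sketch).
      import numpy as np

      def henon_keystream(n, x0=0.1, y0=0.3, a=1.4, b=0.3, burn_in=1000):
          x, y = x0, y0
          out = np.empty(n, dtype=np.uint8)
          for i in range(burn_in + n):
              x, y = 1.0 - a * x * x + y, b * x
              if i >= burn_in:
                  # map the chaotic state to a byte; real schemes quantize more carefully
                  out[i - burn_in] = int(abs(x) * 1e6) % 256
          return out

      # usage (simple XOR masking of 8-bit samples, for illustration only):
      # cipher = samples_uint8 ^ henon_keystream(len(samples_uint8), x0=key1, y0=key2)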

  18. FPGAs Implementation of fast algorithms oriented to mp3 audio decompression

    Directory of Open Access Journals (Sweden)

    Antonio Benavides

    2012-01-01

    Full Text Available The execution of audio decompression algorithms demands powerful processors with a high level of performance; however, such algorithms are not well suited to efficient implementation on mobile devices. This work explores several algorithms whose hardware implementation improves the performance of the processors used in mobile devices that carry out audio decompression tasks. Some experimental results and comparative analyses are presented.

  19. Approaches to building single-stage AC/AC conversion switch-mode audio power amplifiers

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    This paper discusses the possible topologies and promising approaches towards direct single-phase AC-AC conversion of the mains voltage for audio applications. When compared to standard Class-D switching audio power amplifiers with a separate power supply, it is expected that direct conversion will provide better efficiency and higher level of integration, leading to lower component count, volume and cost, but at the expense of a minor performance deterioration. (au)

  20. Voice over: Audio-visual congruency and content recall in the gallery setting.

    Science.gov (United States)

    Fairhurst, Merle T; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to 'go together' are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.

  1. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Full Text Available Robust and reversible watermarking is a promising technique for many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain, shifting the histogram of these statistical values for data hiding. Firstly, the original audio is divided into non-overlapping equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
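
    The per-frame statistical feature described above can be sketched as follows; the linear prediction of the middle sample in each group of three is an assumption for illustration, and the histogram-shifting embedding step is not shown.

      # Per-frame statistical feature: sum of prediction errors over sample triplets.
      import numpy as np

      def frame_feature(frame):
          total = 0.0
          for i in range(0, len(frame) - 2, 3):
              a, b, c = frame[i], frame[i + 1], frame[i + 2]
              predicted = (a + c) / 2.0      # assumed prediction of the middle sample
              total += b - predicted         # prediction error
          return total

      # usage:
      # features = [frame_feature(audio[k:k + 512]) for k in range(0, len(audio) - 512, 512)]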

  2. Studies on a Spatialized Audio Interface for Sonar

    Science.gov (United States)

    2011-10-03

    addition of spatialized audio to visual displays for sonar is much akin to the development of talking movies in the early days of cinema and can be...than using the brute-force approach. PCA is one among several techniques that share similarities with the computational architecture of a

  3. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  4. A method for Perceptual Assessment of Automotive Audio Systems and Cabin Acoustics

    DEFF Research Database (Denmark)

    Kaplanis, Neofytos; Bech, Søren; Sakari, Tervo

    2016-01-01

    This paper reports the design and implementation of a method to perceptually assess the acoustical properties of a car cabin and the subsequent sound reproduction properties of automotive audio systems. Here, we combine Spatial Decomposition Method and Rapid Sensory Analysis techniques. The for...

  5. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    Science.gov (United States)

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio-recorded materials have been widely used by teachers and…

  6. Design of low power and low area passive sigma delta modulators for audio applications

    CERN Document Server

    Fouto, David

    2017-01-01

    This book presents the study, design, modulation, optimization and implementation of low-power, passive DT-ΣΔMs for use in audio applications. The high-gain, high-bandwidth amplifier normally used for integration in ΣΔ modulation is replaced by passive, switched-capacitor branches working under the Ultra Incomplete Settling (UIS) condition, leading to a reduction of the consumed power. The authors describe a design process that uses high-level models and an optimization process based on genetic algorithms to achieve the desired performance.

  7. Evaluation of an Audio Cassette Tape Lecture Course

    Science.gov (United States)

    Blank, Jerome W.

    1975-01-01

    An audio-cassette continuing education course (Selected Topics in Pharmacology) from Extension Services in Pharmacy at the University of Wisconsin was offered to a selected test market of pharmacists and evaluated using a pre-/post-test design. Results showed a significant increase in cognitive knowledge and strong approval from students. (JT)

  8. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and to build an accurate audio-visual speech recognition model without a frame-independence assumption. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  9. Effects of a Theory-Based Audio HIV/AIDS Intervention for Illiterate Rural Females in Amhara, Ethiopia

    Science.gov (United States)

    Bogale, Gebeyehu W.; Boer, Henk; Seydel, Erwin R.

    2011-01-01

    In Ethiopia the level of illiteracy in rural areas is very high. In this study, we investigated the effects of an audio HIV/AIDS prevention intervention targeted at rural illiterate females. In the intervention we used social-oriented presentation formats, such as discussion between similar females and role-play. In a pretest and posttest…

  10. Advances in preventive monitoring of machinery through audio and vibration signals

    OpenAIRE

    Henríquez Rodríguez, Patricia

    2016-01-01

    Doctoral programme: Intelligent Systems and Numerical Applications in Engineering. The publication date is the date of the thesis defence. The objective of this Thesis is the improvement of machinery monitoring systems for fault diagnosis and identification using vibration and audio signals in two applications (bearings and centrifugal pumps), with special emphasis on the feature extraction stage and on the use of audio as a source of information. In the case...

  11. A Preliminary Investigation into the Search Behaviour of Users in a Collection of Digitized Broadcast Audio

    DEFF Research Database (Denmark)

    Lund, Haakon; Skov, Mette; Larsen, Birger

    2014-01-01

    An increasing number of large digitized audio-visual collections within the digital humanities have recently been made available to users. Access to digitized audio-visual collections is often hampered by sparse and inconsistent metadata. This paper presents the preliminary findings from a study of ...

  12. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Audio-vestibular signs and symptoms in Chiari malformation type I. Case series and literature review.

    Science.gov (United States)

    Guerra Jiménez, Gloria; Mazón Gutiérrez, Ángel; Marco de Lucas, Enrique; Valle San Román, Natalia; Martín Laez, Rubén; Morales Angulo, Carmelo

    2015-01-01

    Chiari malformation is an alteration of the base of the skull with herniation through the foramen magnum of the brain stem and cerebellum. Although the most common presentation is occipital headache, the association of audio-vestibular symptoms is not rare. The aim of our study was to describe audio-vestibular signs and symptoms in Chiari malformation type i (CM-I). We performed a retrospective observational study of patients referred to our unit during the last 5 years. We also carried out a literature review of audio-vestibular signs and symptoms in this disease. There were 9 patients (2 males and 7 females), with an average age of 42.8 years. Five patients presented a Ménière-like syndrome; 2 cases, a recurrent vertigo with peripheral features; one patient showed a sudden hearing loss; and one case suffered a sensorineural hearing loss with early childhood onset. The most common audio-vestibular symptom indicated in the literature in patients with CM-I is unsteadiness (49%), followed by dizziness (18%), nystagmus (15%) and hearing loss (15%). Nystagmus is frequently horizontal (74%) or down-beating (18%). Other audio-vestibular signs and symptoms are tinnitus (11%), aural fullness (10%) and hyperacusis (1%). Occipital headache that increases with Valsalva manoeuvres and hand paresthesias are very suggestive symptoms. The appearance of audio-vestibular manifestations in CM-I makes it common to refer these patients to neurotologists. Unsteadiness, vertiginous syndromes and sensorineural hearing loss are frequent. Nystagmus, especially horizontal and down-beating, is not rare. It is important for neurotologists to familiarise themselves with CM-I symptoms to be able to consider it in differential diagnosis. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  14. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  15. acceleration observed in an audio air gas discharge

    International Nuclear Information System (INIS)

    Ragheb, M.S.

    2010-01-01

    An audio air gas discharge enclosed in a Pyrex glass tube of 34 mm diameter and 25 cm length led to the observation of an unusual phenomenon. Relatively large injected light spots of intense brightness, distributed regularly on the contour and in the center of one of the discharge electrodes, are observed. Very high heat is produced on both electrodes; one of them is hotter than the other and reaches 660 degrees C in 3-4 minutes. A series of photographs and recorded video films defines and clarifies the sequence of events that describe the observed phenomenon. The plasma is created by applying audio power through the electrodes of an air gas discharge at 10 kHz from a power supply of up to 500 watts. The discharge voltage is up to 900 volts; the discharge current flowing through the plasma reaches 360 mA. It is found that the discharge system must attain its optimal working conditions in order to produce this striking phenomenon. The obtained plasma is classified as lying at the maximum-condition border of a γ-discharge type. At these conditions, the corresponding maximum electron temperature and density are 16 eV and 10^15 cm^-3, respectively. The observation system succeeded in revealing and clarifying the sequence of events of the phenomenon. In addition, by means of scanning electron microscopy and energy dispersive X-ray systems, the effects on the electrode surfaces are investigated and analyzed. The optical observations, in conjunction with the micrographs and surface microanalysis, demonstrate the occurrence of collisions of powered agglomeration groups with the electrode surface. Detailed interpretation of this phenomenon suggests a molecular acceleration, gaining energy from the plasma formed under optimal discharge working conditions. As a consequence, given the size of the ion agglomerates, this process could be considered a mesoscopic acceleration technique.

  16. Cepstral domain modification of audio signals for data embedding: preliminary results

    Science.gov (United States)

    Gopalan, Kaliappan

    2004-06-01

    A method of embedding data in an audio signal using cepstral domain modification is described. Based on successful embedding in the spectral points of perceptually masked regions in each frame of speech, the technique was first extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding with less than 2 percent bit error rate (BER) for a clean cover speech (from the TIMIT database), and about 2.5 percent for a noisy speech (from an air traffic controller database), when all frames - including silence and transitions between voiced and unvoiced segments - were used. The bit error rate increased significantly when the log spectrum in the vicinity of a formant was modified. In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when the lower range of cepstral indices - corresponding to the vocal tract region - was modified in accordance with the data. With an embedding capacity of approximately 62 bits/s - using one bit per frame regardless of frame energy or type of speech - initial results showed a BER of less than 1.5 percent for a payload of 208 embedded bits using the clean cover speech. A BER of less than 1.3 percent resulted for the noisy host with a capacity of 316 bits. When the cepstrum was modified in the region of excitation, the BER increased to over 10 percent. With quantization causing no significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of speech that are perceptually masked - analogous to embedding in frequency-masked regions - may yield imperceptible stego audio with low BER.
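
    A crude sketch of cepstral-domain embedding is shown below: the real cepstrum of a frame is computed and the mean of a low-index range (roughly the vocal-tract region) is nudged up or down to carry one bit. The index range and step size are placeholders, not the values used in the study, and the resynthesis and extraction sides are omitted.

      # Real cepstrum of a frame and a simple one-bit mean modification.
      import numpy as np

      def real_cepstrum(frame):
          spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
          return np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

      def embed_bit(cepstrum, bit, lo=2, hi=20, step=0.01):
          modified = cepstrum.copy()
          target = step if bit else -step
          modified[lo:hi] += target - modified[lo:hi].mean()   # force the range mean to +/- step
          return modified

      # usage: c = real_cepstrum(speech_frame); c_marked = embed_bit(c, bit=1)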

  17. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes in neuronal response according to a four-time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…

  18. Weaving together peer assessment, audios and medical vignettes in teaching medical terms

    Science.gov (United States)

    Khan, Lateef M.

    2015-01-01

    Objectives The current study aims at exploring the possibility of aligning peer assessment, audiovisuals, and medical case-report extracts (vignettes) in medical terminology teaching. In addition, the study wishes to highlight the effectiveness of audio materials and medical history vignettes in preventing medical students' comprehension, listening, writing, and pronunciation errors. The study also aims at reflecting the medical students' attitudes towards the teaching and learning process. Methods The study involved 161 medical students who received an intensive medical terminology course through audio and medical history extracts. Peer assessment and formative assessment platforms were applied through mock quizzes in a pre- and post-test manner. An 18-item survey was distributed amongst students to investigate their attitudes and feedback towards the teaching and learning process. Quantitative and qualitative data were analysed using the SPSS software. Results The students did better on the post-tests than on the pre-tests for both the audio and medical vignette quizzes, with t-values of -12.09 and -13.60 respectively. Moreover, out of the 133 students, 120 students (90.22%) responded to the survey questions. The students expressed positive attitudes towards the application of audio materials and vignettes in the teaching and learning of medical terminology and towards the learning process. Conclusions The current study revealed that the teaching and learning of medical terminology have more room for the application of advanced technologies, effective assessment platforms, and active learning strategies in higher education. It also highlights that students are capable of carrying more responsibility for assessment, feedback, and e-learning. PMID:26637986

  19. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
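
    A non-adaptive sketch of ITD-based localisation is given below: the lag maximising the cross-correlation of the two ear signals gives the interaural time difference, which is mapped to azimuth with a Woodworth-style free-field approximation. The robot described above instead learns this mapping adaptively through self-motion and visual feedback; head radius and sound speed here are nominal values.

      # ITD estimation by cross-correlation and a simple ITD-to-azimuth mapping.
      import numpy as np

      def estimate_itd(left, right, sr):
          n = len(left)
          corr = np.correlate(left, right, mode="full")   # lags from -(n-1) to n-1
          lag = np.argmax(corr) - (n - 1)
          return lag / sr                                  # seconds; sign indicates the side

      def itd_to_azimuth(itd, head_radius=0.09, c=343.0):
          # Woodworth approximation: itd ≈ (r / c) * (theta + sin(theta)),
          # solved by a coarse search over azimuth theta (radians)
          thetas = np.linspace(-np.pi / 2, np.pi / 2, 1801)
          model = (head_radius / c) * (thetas + np.sin(thetas))
          return float(np.degrees(thetas[np.argmin(np.abs(model - itd))]))

      # usage: azimuth = itd_to_azimuth(estimate_itd(left_mic, right_mic, sr=48000))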

  20. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction performed by audiences of audio-visual content. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process is carried out for each channel, we propose a method of representing all the informative values an agent can obtain from a content item using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation allows us to reflect the evolution of the informative items through the evolution of the audio-visual content.

  1. Derivation and Analysis of a Low-Cost, High-performance Analogue BPCM Control Scheme for Class-D Audio Power Amplifiers

    OpenAIRE

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael A. E.

    2005-01-01

    This paper presents a low-cost analogue control scheme for class-D audio power amplifiers. The scheme is based around bandpass current-mode (BPCM) control, and provides ample stability margins and low distortion over a wide range of operating conditions. Implementation is very simple and does not require the use of operational amplifiers. Small-signal behavior of the controller is accurately predicted, and design is carried out using standard transfer function based linear control methodology...

  2. Modeling and Evaluation of Multimodal Perceptual Quality

    DEFF Research Database (Denmark)

    Petersen, Kim T; Hansen, Steffen Duus; Sørensen, John Aasted

    1997-01-01

    The increasing performance requirements of multimedia modalities, carrying speech, audio, video, image, and graphics emphasize the need for assessment methods of the total quality of a multimedia system and methods for simultaneous analysis of the system components. It is important to take...

  3. Teacher’s Voice on Metacognitive Strategy Based Instruction Using Audio Visual Aids for Listening

    Directory of Open Access Journals (Sweden)

    Salasiah Salasiah

    2018-02-01

    Full Text Available This paper primarily focuses on exploring the teacher's voice toward the application of a metacognitive strategy with audio-visual aids in improving listening comprehension. The metacognitive strategy model applied in the study was inspired by the Vandergrift and Tafaghodtari (2010) instructional model, modified in its procedure and applied with audio-visual aids for improving listening comprehension. The study's setting was SMA Negeri 2 Parepare, South Sulawesi Province, Indonesia. The population of the research was the English teachers of the tenth grade at SMAN 2, and the sample was taken using a random sampling technique. The data were collected through in-depth interviews during the research, recorded, and analyzed using qualitative analysis. This study explored the teacher's response toward the modified model of metacognitive strategy with audio-visual aids in the listening class, covering both positive and negative responses toward the strategy applied during the teaching of listening. The results showed that this strategy helped the teacher a great deal in teaching listening comprehension, as the procedure has systematic steps toward students' listening comprehension. It also makes it easier for the teacher to teach listening by using audio-visual aids such as videos taken from YouTube.

  4. Quality-aware media scheduling on MPSoC platforms

    DEFF Research Database (Denmark)

    Gangadharan, Deepak; Chakraborty, Samarjit; Zimmermann, Roger

    2013-01-01

    Applications that stream multiple video/audio or video+audio clips are being implemented in embedded devices. A Picture-in-Picture (PiP) application is one such application scenario, where two videos are played simultaneously. Although the PiP application is very efficiently handled in televisions and personal computers by providing maximum quality of service to the multiple streams, it is a difficult task in devices with resource constraints. In order to efficiently utilize the resources, it is essential to derive the necessary processor cycles for multiple video streams such that they are displayed ... be scheduled such that a prespecified quality constraint is satisfied with the available service. We present this framework in the context of a PiP application, but it is applicable in general to multiple media streams. The results obtained using the formal framework were further verified using experiments ...

  5. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. ·         Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition. Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; ·         Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) Networks; ...

  6. CREATING AUDIO VISUAL DIALOGUE TASK AS STUDENTS’ SELF ASSESSMENT TO ENHANCE THEIR SPEAKING ABILITY

    Directory of Open Access Journals (Sweden)

    Novia Trisanti

    2017-04-01

    Full Text Available The study gives an overview of employing an audio-visual dialogue task as a student creativity task and self-assessment in an EFL speaking class in tertiary education to enhance the students’ speaking ability. The qualitative research was done in one of the speaking classes at the English Department, Semarang State University, Central Java, Indonesia. The results, as seen from the self-assessment rubric, show that the oral performance through audio-visual recorded tasks done by the students as their self-assessment gave positive evidence. The audio-visual dialogue task can be very beneficial since it can motivate the students’ learning and increase their learning experiences. The self-assessment can be a valuable additional means to improve their speaking ability since it is one of the motives that drive self-evaluation, along with self-verification and self-enhancement.

  7. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focuses on emotion recognition from the face and on hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device ...

  8. Zerodur polishing process for high surface quality and high efficiency

    International Nuclear Information System (INIS)

    Tesar, A.; Fuchs, B.

    1992-08-01

    Zerodur is a glass-ceramic composite of importance in applications where temperature instabilities influence optical and mechanical performance, such as in earthbound and spaceborne telescope mirror substrates. Polished Zerodur surfaces of high quality have been required for laser gyro mirrors. The polished surface quality of substrates affects the performance of high-reflection coatings. Thus, the interest in improving Zerodur polished surface quality has become more general. Beyond eliminating subsurface damage, high quality surfaces are produced by reducing the amount of hydrated material redeposited on the surface during polishing. With proper control of polishing parameters, such surfaces exhibit roughnesses of < 1 Angstrom rms. Zerodur polishing was studied to recommend a high surface quality polishing process which could be easily adapted to standard planetary continuous polishing machines and spindles. This summary contains information on a polishing process developed at LLNL which reproducibly provides high quality polished Zerodur surfaces at very high polishing efficiencies.

  9. A 115dB-DR Audio DAC with –61dBFS out-of-band noise

    NARCIS (Netherlands)

    Westerveld, H.J.; Schinkel, Daniel; van Tuijl, Adrianus Johannes Maria

    2015-01-01

    Out-of-band noise (OBN) is troublesome in analog circuits that process the output of a noise-shaping audio DAC. It causes slewing in amplifiers and aliasing in sampling circuits like ADCs and class-D amplifiers. Nonlinearity in these circuits also causes cross-modulation of the OBN into the audio

  10. Correspondence between audio and visual deep models for musical instrument detection in video recordings

    OpenAIRE

    Slizovskaia, Olga; Gómez, Emilia; Haro, Gloria

    2017-01-01

    This work aims at investigating cross-modal connections between audio and video sources in the task of musical instrument recognition. We also address in this work the understanding of the representations learned by convolutional neural networks (CNNs) and we study feature correspondence between audio and visual components of a multimodal CNN architecture. For each instrument category, we select the most activated neurons and investigate existing cross-correlations between neurons from the ...

  11. [Quality management is associated with high quality services in health care].

    Science.gov (United States)

    Nielsen, Tenna Hassert; Riis, Allan; Mainz, Jan; Jensen, Anne-Louise Degn

    2013-12-09

    In recent years, quality management has been in focus in order to ensure high quality services for the patients in Danish health care. This article provides information on quality management and quality improvement and evaluates its effectiveness in achieving better organizational structures, processes and results in Danish health-care organizations. Our findings generally support that quality management is associated with high quality services in health care.

  12. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have an associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between the audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.
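    The core idea of correlating audio and video dynamics can be illustrated with a short sketch. The following Python fragment is a minimal illustration of that general idea, not the authors' algorithm: it assumes the audio is already available as a sample array and the video as an array of grayscale frames, and it simply correlates the short-term audio energy envelope with per-frame motion energy.

        import numpy as np

        def audio_energy_envelope(audio, sr, fps, n_frames):
            """Short-term audio energy, one value per video frame (illustrative only)."""
            hop = int(sr / fps)  # audio samples per video frame
            env = np.array([np.sum(audio[i * hop:(i + 1) * hop] ** 2) for i in range(n_frames)])
            return (env - env.mean()) / (env.std() + 1e-12)

        def motion_energy(frames):
            """Per-frame motion energy from absolute frame differences."""
            diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
            me = np.concatenate([[0.0], diffs.reshape(len(diffs), -1).sum(axis=1)])
            return (me - me.mean()) / (me.std() + 1e-12)

        def av_correlation(audio, sr, frames, fps):
            """Pearson correlation between the audio and visual dynamics."""
            a = audio_energy_envelope(audio, sr, fps, len(frames))
            v = motion_energy(frames)
            return float(np.corrcoef(a, v)[0, 1])

    A region whose motion energy follows the audio envelope closely would be a natural focus-of-attention candidate; the paper itself works within an H.265/HEVC encoder and uses a more elaborate audiovisual correlation analysis.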

  13. On Modeling Affect in Audio with Non-Linear Symbolic Dynamics

    Directory of Open Access Journals (Sweden)

    Pauline Mouawad

    2017-09-01

    Full Text Available The discovery of semantic information from complex signals is a task concerned with connecting humans’ perceptions and/or intentions with the signals’ content. In the case of audio signals, complex perceptions are appraised in a listener’s mind that trigger affective responses which may be relevant for well-being and survival. In this paper we are interested in the broader question of relations between uncertainty in data, as measured using various information criteria, and emotions, and we propose a novel method that combines nonlinear dynamics analysis with a method of adaptive time series symbolization that finds the meaningful audio structure in terms of symbolized recurrence properties. In a first phase we obtain symbolic recurrence quantification measures from symbolic recurrence plots, without the need to reconstruct the phase space with embedding. Then we estimate symbolic dynamical invariants from symbolized time series, after embedding. The invariants are: correlation dimension, correlation entropy and Lyapunov exponent. Through their application to the logistic map, we show that our measures are in agreement with known methods from the literature. We further show that one symbolic recurrence measure, namely the symbolic Shannon entropy, correlates positively with the positive Lyapunov exponents. Finally we evaluate the performance of our measures in emotion recognition through the implementation of classification tasks for different types of audio signals, and show that in some cases they perform better than state-of-the-art methods that rely on low-level acoustic features.
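    As a rough illustration of the symbolization step and the entropy measure mentioned above, the sketch below (not the authors' exact method) generates a logistic-map series, symbolizes it by quantile binning into a small alphabet, and computes the Shannon entropy of the symbol sequence.

        import numpy as np

        def logistic_map(r=4.0, x0=0.4, n=5000):
            # x_{t+1} = r * x_t * (1 - x_t)
            x = np.empty(n)
            x[0] = x0
            for t in range(n - 1):
                x[t + 1] = r * x[t] * (1.0 - x[t])
            return x

        def symbolize(series, n_symbols=4):
            # quantile-based symbolization into an alphabet of n_symbols
            edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
            return np.digitize(series, edges)

        def shannon_entropy(symbols):
            # Shannon entropy (bits) of the symbol distribution
            _, counts = np.unique(symbols, return_counts=True)
            p = counts / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        print(shannon_entropy(symbolize(logistic_map())))  # ~2 bits: quantile bins are roughly equiprobable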

  14. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-06-01

    This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  15. An Introduction to Boiler Water Chemistry for the Marine Engineer: A Text of Audio-Tutorial Instruction.

    Science.gov (United States)

    Schlenker, Richard M.; And Others

    Presented is a manuscript for an introductory boiler water chemistry course for marine engineer education. The course is modular, self-paced, audio-tutorial, contract graded and combined lecture-laboratory instructed. Lectures are presented to students individually via audio-tapes and 35 mm slides. The course consists of a total of 17 modules -…

  16. Noise-Canceling Helmet Audio System

    Science.gov (United States)

    Seibert, Marc A.; Culotta, Anthony J.

    2007-01-01

    A prototype helmet audio system has been developed to improve voice communication for the wearer in a noisy environment. The system was originally intended to be used in a space suit, wherein noise generated by airflow of the spacesuit life-support system can make it difficult for remote listeners to understand the astronaut s speech and can interfere with the astronaut s attempt to issue vocal commands to a voice-controlled robot. The system could be adapted to terrestrial use in helmets of protective suits that are typically worn in noisy settings: examples include biohazard, fire, rescue, and diving suits. The system (see figure) includes an array of microphones and small loudspeakers mounted at fixed positions in a helmet, amplifiers and signal-routing circuitry, and a commercial digital signal processor (DSP). Notwithstanding the fixed positions of the microphones and loudspeakers, the system can accommodate itself to any normal motion of the wearer s head within the helmet. The system operates in conjunction with a radio transceiver. An audio signal arriving via the transceiver intended to be heard by the wearer is adjusted in volume and otherwise conditioned and sent to the loudspeakers. The wearer s speech is collected by the microphones, the outputs of which are logically combined (phased) so as to form a microphone- array directional sensitivity pattern that discriminates in favor of sounds coming from vicinity of the wearer s mouth and against sounds coming from elsewhere. In the DSP, digitized samples of the microphone outputs are processed to filter out airflow noise and to eliminate feedback from the loudspeakers to the microphones. The resulting conditioned version of the wearer s speech signal is sent to the transceiver.
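    The "logical combination (phasing)" of the microphone outputs described above is essentially delay-and-sum beamforming. The snippet below is a generic sketch of that idea under simplified assumptions (known microphone-to-mouth distances, speed of sound 343 m/s); it is not the system's actual DSP code.

        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s

        def delay_and_sum(mic_signals, mic_distances, sr):
            """Align each channel for sound originating at the mouth, then average.

            mic_signals:   array of shape (n_mics, n_samples)
            mic_distances: distance in meters from each microphone to the mouth
            """
            delays = mic_distances / SPEED_OF_SOUND
            shifts = np.round((delays - delays.min()) * sr).astype(int)
            aligned = np.zeros_like(mic_signals, dtype=np.float64)
            n = mic_signals.shape[1]
            for ch, shift in enumerate(shifts):
                # advance later-arriving channels so mouth-origin sound lines up
                aligned[ch, :n - shift] = mic_signals[ch, shift:]
            return aligned.mean(axis=0)

    After alignment, speech from the mouth adds coherently while noise from other directions adds incoherently, which produces the directional sensitivity pattern mentioned in the record.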

  17. The use of ambient audio to increase safety and immersion in location-based games

    Science.gov (United States)

    Kurczak, John Jason

    The purpose of this thesis is to propose an alternative type of interface for mobile software being used while walking or running. Our work addresses the problem of visual user interfaces for mobile software being potentially unsafe for pedestrians, and not being very immersive when used for location-based games. In addition, location-based games and applications can be difficult to develop when directly interfacing with the sensors used to track the user's location. These problems need to be addressed because portable computing devices are becoming a popular tool for navigation, playing games, and accessing the internet while walking. This poses a safety problem for mobile users, who may be paying too much attention to their device to notice and react to hazards in their environment. The difficulty of developing location-based games and other location-aware applications may significantly hinder the prevalence of applications that explore new interaction techniques for ubiquitous computing. We created the TREC toolkit to address the issues with tracking sensors while developing location-based games and applications. We have developed functional location-based applications with TREC to demonstrate the amount of work that can be saved by using this toolkit. In order to have a safer and more immersive alternative to visual interfaces, we have developed ambient audio interfaces for use with mobile applications. Ambient audio uses continuous streams of sound over headphones to present information to mobile users without distracting them from walking safely. In order to test the effectiveness of ambient audio, we ran a study to compare ambient audio with handheld visual interfaces in a location-based game. We compared players' ability to safely navigate the environment, their sense of immersion in the game, and their performance at the in-game tasks. We found that ambient audio was able to significantly increase players' safety and sense of immersion compared to a ...

  18. An Audio Architecture Integrating Sound and Live Voice for Virtual Environments

    National Research Council Canada - National Science Library

    Krebs, Eric

    2002-01-01

    The purpose behind this thesis was to design and implement audio system architecture, both in hardware and in software, for use in virtual environments. The hardware and software design requirements ...

  19. Big Data Analytics: Challenges And Applications For Text, Audio, Video, And Social Media Data

    OpenAIRE

    Jai Prakash Verma; Smita Agrawal; Bankim Patel; Atul Patel

    2016-01-01

    All types of machine automated systems are generating large amounts of data in different forms, such as statistical, text, audio, video, sensor, and bio-metric data, which has given rise to the term Big Data. In this paper we discuss the issues, challenges, and applications of these types of Big Data with consideration of the big data dimensions. We discuss social media data analytics, content-based analytics, text data analytics, audio and video data analytics, their issues and expected applica...

  20. Hierarchical vs non-hierarchical audio indexation and classification for video genres

    Science.gov (United States)

    Dammak, Nouha; BenAyed, Yassine

    2018-04-01

    In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the prominent asset of capturing local temporal information. The main contribution of our study is to show the wide effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification considers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video span, yielding a classification accuracy of over 99%.
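    A hierarchical categorization of the kind described can be sketched as two stages of SVM classifiers operating on MFCC feature vectors. The fragment below is a schematic illustration using scikit-learn with placeholder training arrays (random data standing in for real block-level MFCC features); the study's own feature extraction and sub-genre taxonomy are not reproduced here.

        import numpy as np
        from sklearn.svm import SVC

        # Placeholder features/labels; stage 1: 0=sports, 1=music clip, 2=news.
        X_genre, y_genre = np.random.rand(300, 13), np.random.randint(0, 3, 300)
        # Stage 2: hypothetical sub-genres within "news" (speakers/dialects).
        X_news, y_news = np.random.rand(100, 13), np.random.randint(0, 4, 100)

        genre_svm = SVC(kernel="rbf").fit(X_genre, y_genre)
        news_svm = SVC(kernel="rbf").fit(X_news, y_news)

        def classify(mfcc_block):
            """Predict the coarse genre first, then refine only within that genre."""
            genre = int(genre_svm.predict(mfcc_block.reshape(1, -1))[0])
            if genre == 2:  # news: descend one level in the hierarchy
                return genre, int(news_svm.predict(mfcc_block.reshape(1, -1))[0])
            return genre, None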

  1. Assessment of rural soundscapes with high-speed train noise.

    Science.gov (United States)

    Lee, Pyoung Jik; Hong, Joo Young; Jeon, Jin Yong

    2014-06-01

    In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. Furthermore, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions.
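    For reference, the A-weighted equivalent sound pressure level used above has the standard definition (with p_A the A-weighted instantaneous sound pressure, p_0 = 20 µPa the reference pressure, and T the measurement interval):

        L_{Aeq,T} = 10 \log_{10}\left( \frac{1}{T} \int_{0}^{T} \frac{p_A^2(t)}{p_0^2}\, dt \right) \ \mathrm{dB}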

  2. Audio Haptic Videogaming for Developing Wayfinding Skills in Learners Who are Blind.

    Science.gov (United States)

    Sánchez, Jaime; de Borba Campos, Marcia; Espinoza, Matías; Merabet, Lotfi B

    2014-01-01

    Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio and haptic based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed.

  3. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-12-01

    Full Text Available This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  4. Implementation and Analysis Audio Steganography Used Parity Coding for Symmetric Cryptography Key Delivery

    Directory of Open Access Journals (Sweden)

    Afany Zeinata Firdaus

    2013-12-01

    Full Text Available In today's era of communication, online data transactions are increasing and information is ever more accessible, both for upload and download, so a capable security system is required. Blowfish cryptography combined with audio steganography is one way to secure data so that they cannot be accessed by unauthorized parties. In this study, the audio steganography technique is implemented using the parity coding method to deliver the Blowfish cryptographic key in an Android-based e-commerce application. The results show that the average computation time of the insertion (embedding) stage of the secret message is shorter than the average computation time of the extraction stage. The test results also show that the more characters are embedded, the greater the received noise: the highest SNR, 11.9905 dB, is obtained when 506 characters are embedded, while the lowest SNR, 5.6897 dB, is obtained when 2006 characters are embedded. Keywords: audio steganography, parity coding, embedding, extraction, Blowfish cryptography.
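    A minimal sketch of parity-coded embedding in 16-bit PCM samples, together with the SNR computation used to quantify the added noise, might look as follows; this illustrates the general technique under simplified assumptions, not the paper's exact scheme.

        import numpy as np

        def embed_parity(samples, bits, region=8):
            """Embed one bit per region: flip one LSB so that the XOR (parity)
            of the region's least significant bits equals the message bit."""
            stego = samples.copy()
            for i, bit in enumerate(bits):
                chunk = stego[i * region:(i + 1) * region]
                parity = int(np.bitwise_and(chunk, 1).sum()) % 2
                if parity != bit:
                    chunk[0] ^= 1  # flipping one LSB corrects the parity
            return stego

        def snr_db(cover, stego):
            """SNR of the cover signal relative to the embedding noise."""
            c = cover.astype(np.float64)
            noise = c - stego.astype(np.float64)
            return 10 * np.log10(np.sum(c ** 2) / (np.sum(noise ** 2) + 1e-12))

        cover = (np.random.randn(16000) * 1000).astype(np.int16)
        stego = embed_parity(cover, np.random.randint(0, 2, 200))
        print(snr_db(cover, stego))  # embedding more bits flips more LSBs and lowers the SNR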

  5. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators.

    Science.gov (United States)

    Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P

    2017-04-01

    A large portion of road traffic crashes occur at intersections for the reason that drivers lack necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from where the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.

  6. Mixing audio concepts, practices and tools

    CERN Document Server

    Izhaki, Roey

    2013-01-01

    Your mix can make or break a record, and mixing is an essential catalyst for a record deal. Professional engineers with exceptional mixing skills can earn vast amounts of money and find that they are in demand by the biggest acts. To develop such skills, you need to master both the art and science of mixing. The new edition of this bestselling book offers all you need to know and put into practice in order to improve your mixes. Covering the entire process --from fundamental concepts to advanced techniques -- and offering a multitude of audio samples, tips and tricks, this boo

  7. Transmisión de audio usando redes Zigbee

    Directory of Open Access Journals (Sweden)

    David Delgado León

    2011-03-01

    Full Text Available Zigbee is a communication protocol based on the IEEE 802.15.4 standard for wireless networks. Conceived for the control and monitoring of sensor networks in industrial, medical and home-automation environments, it has attracted growing interest for evaluation in multimedia applications. Even without guaranteeing QoS (Quality of Service), because of its limited bandwidth, there is a set of applications for surveillance, rescue and salvage teams, home security, and groups deployed in a limited area with a need for communication, where a low-cost real-time audio and video system based on Zigbee technology is a highly attractive idea. The design of a system that allows communication among a group of users deployed in a limited area is presented. It uses RISC microcontrollers and Zigbee technology. The feasibility of using Zigbee technology for audio transmission is investigated, and ... are analyzed.

  8. Audio Visual Media Components in Educational Game for Elementary Students

    Directory of Open Access Journals (Sweden)

    Meilani Hartono

    2016-12-01

    Full Text Available The purpose of this research was to review and implement interactive audio-visual media used in an educational game to improve elementary students' interest in learning mathematics. The game was developed for the desktop platform. The art of the game was set as 2D cartoon art with animation and audio in order to make students more interested. Four mini games were developed based on research on mathematics study. The development method used was the Multimedia Development Life Cycle (MDLC), which consists of requirement, design, development, testing, and implementation phases. Data collection methods used were questionnaire, literature study, and interview. The conclusion is that elementary students are interested in an educational game that is fun and active (moving objects), with a fast tempo of music and a carefree color like blue. This educational game is hoped to be an alternative teaching tool combined with conventional teaching methods.

  9. Carrier Distortion in Hysteretic Self-Oscillating Class-D Audio Power Amplifiers: Analysis and Optimization

    OpenAIRE

    Høyerby, Mikkel Christian Kofod; Andersen, Michael A. E.

    2009-01-01

    An important distortion mechanism in hysteretic self-oscillating (SO) class-D (switch mode) power amplifiers, carrier distortion, is analyzed and an optimization method is proposed. This mechanism is an issue in any power amplifier application where a high degree of proportionality between input and output is required, such as in audio power amplifiers or xDSL drivers. From an average-mode point of view, carrier distortion is shown to be caused by nonlinear variation of the hysteretic compara...

  10. Publication and Distribution of Chinese Standard and Quality Books

    Institute of Scientific and Technical Information of China (English)

    Bai Demei

    2010-01-01

    1. A Brief Introduction to SPC: Standards Press of China (SPC), founded in October 1963, is the largest publisher of standard and quality products in China. It publishes national standards, trade standards, and audio, video, electronic, and network products.

  11. A Novel Chewing Detection System Based on PPG, Audio, and Accelerometry.

    Science.gov (United States)

    Papapanagiotou, Vasileios; Diou, Christos; Zhou, Lingchuan; van den Boer, Janet; Mars, Monica; Delopoulos, Anastasios

    2017-05-01

    In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smartphones, can be used to robustly extract objective and real-time measurements of human behavior. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this paper, in work performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high-accuracy and low-sampling-rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semi-free living dataset of 14 subjects, which includes more than 60 h of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.
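    The late-fusion step can be sketched very compactly: train one SVM per modality and combine their decision scores. The snippet below is a schematic example with placeholder feature arrays standing in for the real audio and PPG features; it is not the study's pipeline.

        import numpy as np
        from sklearn.svm import SVC

        # Placeholder per-window features for each modality, plus chewing labels.
        audio_X, ppg_X = np.random.rand(500, 20), np.random.rand(500, 6)
        y = np.random.randint(0, 2, 500)

        audio_svm = SVC(kernel="rbf").fit(audio_X, y)
        ppg_svm = SVC(kernel="rbf").fit(ppg_X, y)

        def late_fusion_predict(audio_feat, ppg_feat, w_audio=0.6):
            """Weighted sum of the two SVM decision scores, thresholded at zero."""
            s_a = audio_svm.decision_function(audio_feat.reshape(1, -1))[0]
            s_p = ppg_svm.decision_function(ppg_feat.reshape(1, -1))[0]
            return int(w_audio * s_a + (1.0 - w_audio) * s_p > 0)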

  12. The Library of Congress: Evaluation of the NLS/BPH Braille and Audio Magazine Program. Final Project Report.

    Science.gov (United States)

    Bosma and Associates International, Seattle, WA.

    This final report presents an independent formative and summative evaluation of the National Library Services for the Blind and Physically Handicapped (NLS/BPH) braille and audio magazine program. In this program, 77 magazines are distributed directly to subscribers, with 43 magazines available on audio flexible discs and 34 magazines available in…

  13. Theory and Application of Audio-Based Assessment of Cough

    Directory of Open Access Journals (Sweden)

    Yan Shi

    2018-01-01

    Full Text Available Cough is a common symptom of many respiratory diseases. Much of the medical literature underlines that a system for the automatic, objective, and reliable detection of cough events is important and very promising for detecting pathology severity in chronic cough disease. In order to track the development status of audio-based cough monitoring systems, we briefly describe the history of objective cough detection and then illustrate the cough sound generating principle. The probable endpoints of cough clinical studies, including cough frequency, intensity of coughing, and acoustic properties of the cough sound, are analyzed in this paper. Finally, we introduce some successful cough monitoring equipment and their recognition algorithms in detail. It can be concluded that, firstly, the acoustic variability of cough sounds within and between individuals makes it difficult to assess the intensity of coughing; furthermore, great progress in audio-based cough detection is now being made; moreover, accurate portable objective monitoring systems will become available and widely used in home care and clinical trials in the near future.

  14. Automatic processing of CERN video, audio and photo archives

    International Nuclear Information System (INIS)

    Kwiatek, M

    2008-01-01

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services

  15. Automatic processing of CERN video, audio and photo archives

    Energy Technology Data Exchange (ETDEWEB)

    Kwiatek, M [CERN, Geneva (Switzerland)], E-mail: Michal.Kwiatek@cern.ch

    2008-07-15

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services.

  16. Can audio recording of outpatient consultations improve patients recall and understanding?

    DEFF Research Database (Denmark)

    Wolderslund, Maiken

    Introduction: Information provided in an outpatient consultation concerns medication, diagnostic tests, treatment and rehabilitation, all of which are crucial knowledge with regard to patient compliance, decision making and general patient satisfaction. Despite good communication skills among clinicians, the communication is challenged by the fact that patients tend to forget or misunderstand parts of the information given. Thus we have designed a study which gives the patients the possibility to hear their consultation again. An Interactive Voice Response platform enables an audio recording of the dialogue between the patient and the clinician via the telephone in the consultation room. By dialing a dedicated number, patients can get access to an audio recording of their consultation by entering their social security number along with a PIN. The primary objective of this study is to determine ...

  17. A Power Efficient Audio Amplifier Combining Switching and Linear Techniques

    NARCIS (Netherlands)

    van der Zee, Ronan A.R.; van Tuijl, Adrianus Johannes Maria

    1998-01-01

    Integrated Class D audio amplifiers are very power efficient, but require an external filter which prevents further integration. Also due to this filter, large feedback factors are hard to realise, so that the load influences the distortion- and transfer characteristics. The amplifier presented in

  18. A power-efficient audio amplifier combining switching and linear techniques

    NARCIS (Netherlands)

    van der Zee, Ronan A.R.; van Tuijl, Adrianus Johannes Maria

    1999-01-01

    Integrated class-D audio amplifiers are very power efficient but require an external LC reconstruction filter, which prevents further integration. Also due to this filter, large feedback factors are hard to realize, so that the load influences the distortion and transfer characteristics. The 30 W

  19. A conceptual framework for the design and analysis of first-person shooter audio and its potential use for game engines

    DEFF Research Database (Denmark)

    Grimshaw, Mark Nicholas; Schott, Gareth

    2007-01-01

    We introduce and describe a new conceptual framework for the design and analysis of audio for immersive first-person shooter games, and discuss its potential implications for the development of the audio component of game engines. The framework was created in order to illustrate and acknowledge the direct role of in-game audio in shaping player-player interactions and in creating a sense of immersion in the game world. Furthermore, it is argued that the relationship between player and sound is best conceptualized theoretically as an acoustic ecology. Current game engines are capable of game world spatiality through acoustic shading, but the ideas presented here provide a framework to explore other immersive possibilities for game audio through realtime synthesis.

  20. Use of Audio Cuing to Expand Employment Opportunities for Adolescents with Autism Spectrum Disorders and Intellectual Disabilities

    Science.gov (United States)

    Allen, Keith D.; Burke, Raymond V.; Howard, Monica R.; Wallace, Dustin P.; Bowen, Scott L.

    2012-01-01

    We evaluated audio cuing to facilitate community employment of individuals with autism and intellectual disability. The job required promoting products in retail stores by wearing an air-inflated WalkAround[R] costume of a popular commercial character. Three adolescents, ages 16-18, were initially trained with video modeling. Audio cuing was then…

  1. Towards Structural Analysis of Audio Recordings in the Presence of Musical Variations

    Directory of Open Access Journals (Sweden)

    Müller Meinard

    2007-01-01

    Full Text Available One major goal of structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches to this problem work well for music where the repetitions largely agree with respect to instrumentation and tempo, as is typically the case for popular music. For other classes of music such as Western classical music, however, musically similar audio segments may exhibit significant variations in parameters such as dynamics, timbre, execution of note groups, modulation, articulation, and tempo progression. In this paper, we propose a robust and efficient algorithm for audio structure analysis which allows the identification of musically similar segments even in the presence of large variations in these parameters. To account for such variations, our main idea is to incorporate invariance at various levels simultaneously: we design a new type of statistical features to absorb microvariations, introduce an enhanced local distance measure to account for local variations, and describe a new strategy for structure extraction that can cope with the global variations. Our experimental results with classical and popular music show that our algorithm performs successfully even in the presence of significant musical variations.
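    Methods in this family typically start from a self-similarity (or self-distance) matrix of frame-wise features, in which repeated segments appear as low-cost diagonal stripes. The sketch below shows only that generic first step with placeholder chroma-like features; the enhanced statistical features and local distance measure proposed in the paper are not reproduced here.

        import numpy as np

        def self_distance_matrix(features):
            """Pairwise cosine distance between feature frames (frames x dims)."""
            unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
            return 1.0 - unit @ unit.T  # zero on the diagonal; repeats show up as low-cost stripes

        # Placeholder 12-dimensional chroma-like features for 400 frames.
        feats = np.abs(np.random.randn(400, 12))
        D = self_distance_matrix(feats)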

  2. The Build-Up Course of Visuo-Motor and Audio-Motor Temporal Recalibration

    Directory of Open Access Journals (Sweden)

    Yoshimori Sugano

    2011-10-01

    Full Text Available Sensorimotor timing is recalibrated after a brief exposure to a delayed feedback of voluntary actions (temporal recalibration effect: TRE) (Heron et al., 2009; Stetson et al., 2006; Sugano et al., 2010). We introduce a new paradigm, namely ‘synchronous tapping’ (ST), which allows us to investigate how the TRE builds up during adaptation. In each experimental trial, participants were repeatedly exposed to a constant lag (∼150 ms) between their voluntary action (pressing a mouse) and a feedback stimulus (a visual flash / an auditory click) 10 times. Immediately after that, they performed an ST task with the same stimulus as a pace signal (7 flashes / clicks). A subjective ‘no-delay condition’ (∼50 ms) served as control. The TRE manifested itself as a change in the tap-stimulus asynchrony that compensated the exposed lag (e.g., after lag adaptation, the tap preceded the stimulus more than in control) and built up quickly (∼3–6 trials, ∼23–45 sec) in both the visuo- and audio-motor domains. The audio-motor TRE was bigger and built up faster than the visuo-motor one. To conclude, the TRE is comparable between the visuo- and audio-motor domains, though they are slightly different in size and build-up rate.

  3. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception...... addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications...

  4. Towards a universal representation for audio information retrieval and analysis

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Troelsgaard, Rasmus; Larsen, Jan

    2013-01-01

    A fundamental and general representation of audio and music which integrates multi-modal data sources is important for both application and basic research purposes. In this paper we address this challenge by proposing a multi-modal version of the Latent Dirichlet Allocation model which provides a...

  5. Haptic and Visual feedback in 3D Audio Mixing Interfaces

    DEFF Research Database (Denmark)

    Gelineck, Steven; Overholt, Daniel

    2015-01-01

    This paper describes the implementation and informal evaluation of a user interface that explores haptic feedback for 3D audio mixing. The implementation compares different approaches using either the LEAP Motion for mid-air hand gesture control, or the Novint Falcon for active haptic feedback ...

  6. The sweet-home project: audio technology in smart homes to improve well-being and reliance.

    Science.gov (United States)

    Vacher, Michel; Istrate, Dan; Portet, François; Joubert, Thierry; Chevalier, Thierry; Smidtas, Serge; Meillon, Brigitte; Lecouteux, Benjamin; Sehili, Mohamed; Chahuara, Pedro; Méniard, Sylvain

    2011-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project focusing on the multimodal sound corpus acquisition and labelling and on the investigated techniques for speech and sound recognition. The user study and the recognition performances show the interest of this audio technology.

  7. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  8. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  9. Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

    Directory of Open Access Journals (Sweden)

    Marko Helén

    2010-01-01

    Full Text Available This paper proposes a query by example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs of their frame-wise acoustic features. Since the features are continuous valued, we propose to model them using Gaussian mixture models (GMMs or hidden Markov models (HMMs. The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of Kullback-Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure for similarity. Measures based on GMMs or HMMs are shown to produce better results than that of the existing methods based on simpler statistics or histograms of the features. A good performance with low computational cost is obtained with the proposed Euclidean distance.
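    One of the distances mentioned, the Kullback-Leibler divergence between two GMMs, has no closed form, which is why approximations are needed. A common Monte Carlo approximation is sketched below with scikit-learn GMMs fitted to placeholder feature matrices; it illustrates the general idea rather than the paper's specific approximations.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mc_kl_divergence(gmm_p, gmm_q, n_samples=5000):
            """Monte Carlo estimate of KL(p || q) = E_p[log p(x) - log q(x)]."""
            x, _ = gmm_p.sample(n_samples)
            return float(np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x)))

        # Placeholder frame-wise features (frames x dims) for two audio clips.
        feats_a = np.random.randn(1000, 13)
        feats_b = np.random.randn(1200, 13) + 0.5
        gmm_a = GaussianMixture(n_components=4, covariance_type="diag").fit(feats_a)
        gmm_b = GaussianMixture(n_components=4, covariance_type="diag").fit(feats_b)

        # Symmetrized divergence as a dissimilarity between the two clips.
        dist = mc_kl_divergence(gmm_a, gmm_b) + mc_kl_divergence(gmm_b, gmm_a)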

  10. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least......-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals....
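    In generic terms (a sketch of the model class, not the paper's exact notation), an amplitude-modulated sinusoidal model of this kind represents a segment of audio as sinusoids whose time-varying amplitudes are linear combinations of fixed basis vectors:

        \hat{s}(n) = \sum_{k=1}^{K} a_k(n) \cos(\omega_k n + \phi_k), \qquad a_k(n) = \sum_{l=1}^{L} c_{k,l} \, b_l(n)

    where the coefficients c_{k,l} are chosen to minimize a perceptually weighted error between the original and modeled signals.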

  11. DOA and Pitch Estimation of Audio Sources using IAA-based Filtering

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    For decades, it has been investigated how to separately solve the problems of both direction-of-arrival (DOA) and pitch estimation. Recently, it was found that estimating these parameters jointly from multichannel recordings of audio can be extremely beneficial. Many joint estimators are based on knowledge of the inverse sample covariance matrix. Typically, this covariance is estimated using the sample covariance matrix, but for this estimate to be full rank, many temporal samples are needed. In cases with non-stationary signals, this is a serious limitation. We therefore investigate how a recent joint DOA and pitch filtering-based estimator can be combined with the iterative adaptive approach to circumvent this limitation in joint DOA and pitch estimation of audio sources. Simulations show a clear improvement compared to when using the sample covariance matrix, and the considered approach also ...
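    For reference, the sample covariance matrix referred to above is, in standard notation (with x(n) the length-M multichannel snapshot at time n and N the number of snapshots),

        \hat{\mathbf{R}} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}(n) \, \mathbf{x}^{H}(n)

    which is rank deficient, and hence not invertible, whenever N < M; this is precisely the few-sample limitation that motivates replacing it with the iterative adaptive approach.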

  12. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

    Directory of Open Access Journals (Sweden)

    W. H. Adams

    2003-02-01

    Full Text Available We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM, hidden Markov models (HMM, and support vector machines (SVM. Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.

  13. Listened To Any Good Books Lately? The Prosodic Analysis of Audio Book Narration

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2006-06-01

    Full Text Available The popularity of audio books is increasing. In the USA fewer people are reading books but many more are listening to them on tapes, CD’s and in MP3 format. The phenomenon is redefining the notion of reading. The purpose of the paper is to present some pros and cons of listening to books instead of reading them. The conclusions have been reached on the basis of a linguistic analysis of parts of two audio books belonging to two different literary genres: a crime novel (Dan Brown, The Da Vinci Code and a comic one (Helen Fielding, Bridget Jones: The Edge of Reason.

  14. Effect of Cartoon Illustrations on the Comprehension and Evaluation of Information Presented in the Print and Audio Mode.

    Science.gov (United States)

    Sewell, Edward H., Jr.

    This study investigates the effects of cartoon illustrations on female and male college student comprehension and evaluation of information presented in several combinations of print, audio, and visual formats. Subjects were assigned to one of five treatment groups: printed text, printed text with cartoons, audiovisual presentations, audio only…

  15. Audio frequency in vivo optical coherence elastography

    Science.gov (United States)

    Adie, Steven G.; Kennedy, Brendan F.; Armstrong, Julian J.; Alexandrov, Sergey A.; Sampson, David D.

    2009-05-01

    We present a new approach to optical coherence elastography (OCE), which probes the local elastic properties of tissue by using optical coherence tomography to measure the effect of an applied stimulus in the audio frequency range. We describe the approach, based on analysis of the Bessel frequency spectrum of the interferometric signal detected from scatterers undergoing periodic motion in response to an applied stimulus. We present quantitative results of sub-micron excitation at 820 Hz in a layered phantom and the first such measurements in human skin in vivo.
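    The Bessel frequency spectrum mentioned above follows from the standard Jacobi-Anger expansion: if the interferometric phase is modulated sinusoidally with amplitude beta (in radians) at the stimulus frequency f_s, then

        \cos\big(\phi_0 + \beta \sin(2\pi f_s t)\big) = J_0(\beta)\cos\phi_0 + 2\cos\phi_0 \sum_{m=1}^{\infty} J_{2m}(\beta)\cos(4\pi m f_s t) - 2\sin\phi_0 \sum_{m=0}^{\infty} J_{2m+1}(\beta)\sin\big(2\pi(2m+1) f_s t\big)

    so the strengths of the harmonics at multiples of f_s are set by Bessel functions of the vibration amplitude, which is what allows sub-micron excitation to be quantified from the detected signal.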

  16. Audio frequency in vivo optical coherence elastography

    International Nuclear Information System (INIS)

    Adie, Steven G; Kennedy, Brendan F; Armstrong, Julian J; Alexandrov, Sergey A; Sampson, David D

    2009-01-01

    We present a new approach to optical coherence elastography (OCE), which probes the local elastic properties of tissue by using optical coherence tomography to measure the effect of an applied stimulus in the audio frequency range. We describe the approach, based on analysis of the Bessel frequency spectrum of the interferometric signal detected from scatterers undergoing periodic motion in response to an applied stimulus. We present quantitative results of sub-micron excitation at 820 Hz in a layered phantom and the first such measurements in human skin in vivo.

  17. Audio-Visual Processing in Meetings: Seven Questions and Current AMI Answers

    NARCIS (Netherlands)

    Al-Hamas, Marc; Hain, Thomas; Cernocky, Jan; Schreiber, Sascha; Poel, Mannes; Rienks, R.J.

    2007-01-01

    The project Augmented Multi-party Interaction (AMI) is concerned with the development of meeting browsers and remote meeting assistants for instrumented meeting rooms – and the required component technologies R&D themes: group dynamics, audio, visual, and multimodal processing, content abstraction,

  18. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    Science.gov (United States)

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  19. Effect of audio in-vehicle red light-running warning message on driving behavior based on a driving simulator experiment.

    Science.gov (United States)

    Yan, Xuedong; Liu, Yang; Xu, Yongcun

    2015-01-01

    Drivers' incorrect decisions of crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation on the effectiveness of the red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
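    For readers unfamiliar with the statistic, the reported values are odds ratios from a logistic regression; with p the probability of a go decision (or of running the red light) and x an indicator for the no-warning condition, the standard model and odds ratio are

        \log\frac{p}{1-p} = \beta_0 + \beta_1 x, \qquad \mathrm{OR} = e^{\beta_1}

    so an odds ratio of about 86 means the odds of a go decision at yellow onset were roughly 86 times higher without the warning than with it.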

  20. Can audio coached 4D CT emulate free breathing during the treatment course?

    International Nuclear Information System (INIS)

    Persson, Gitte F.; Nygaard, Ditte E.; Olsen, Mikael; Juhler-Noettrup, Trine; Pedersen, Anders N.; Specht, Lena; Korreman, Stine S.

    2008-01-01

    Background. The image quality of 4DCT depends on breathing regularity. Respiratory audio coaching may improve regularity and reduce motion artefacts. We question the safety of coached planning 4DCT without coaching during treatment. We investigated the possibility of coaching to a more stable breathing without changing the breathing amplitude. The interfraction variation of the breathing cycle amplitude in free and coached breathing was studied as well as the possible impact of fatigue on longer coaching sessions. Methods. Thirteen volunteers completed respiratory audio coaching on 3 days within a 2 week period. An external marker system monitoring the motion of the thoraco-abdominal wall was used to track the respiration. On all days, free breathing and two coached breathing curves were recorded. We assumed that free versus coached breathing from day 1 (reference session) simulated breathing during an uncoached versus coached planning 4DCT, respectively, and compared the mean breathing cycle amplitude to the free versus coached breathing from day 2 and 3 simulating free versus coached breathing during treatment. Results. For most volunteers it was impossible to apply coaching without changes in breathing cycle amplitude. No significant decrease in standard deviation of breathing cycle amplitude distribution was seen. Generally it was not possible to predict the breathing cycle amplitude and its variation the following days based on the breathing in the reference session irrespective of coaching or free breathing. We found a significant tendency towards an increased breathing cycle amplitude variation with the duration of the coaching session. Conclusion. These results suggest that large interfraction variation is present in breathing amplitude irrespective of coaching, leading to the suggestion of daily image guidance for verification of respiratory pattern and tumour related motion. Until further investigated it is not recommendable to use coached 4DCT for