WorldWideScience

Sample records for processing audio signals

  1. An introduction to audio content analysis applications in signal processing and music informatics

    CERN Document Server

    Lerch, Alexander

    2012-01-01

    "With the proliferation of digital audio distribution over digital media, audio content analysis is fast becoming a requirement for designers of intelligent signal-adaptive audio processing systems. Written by a well-known expert in the field, this book provides quick access to different analysis algorithms and allows comparison between different approaches to the same task, making it useful for newcomers to audio signal processing and industry experts alike. A review of relevant fundamentals in audio signal processing, psychoacoustics, and music theory, as well as downloadable MATLAB files are also included"--

  2. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    This paper describes a modification of a well-known image coding algorithm, Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black-and-white image coding. Since black-and-white images and audio signals have different statistical characteristics, applying this image coding algorithm to audio signals presents both a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications concern signal quantization, with quantizers redesigned to better suit audio signals. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
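
    The abstract does not give the modified quantizers, but the baseline BTC step it adapts is well defined: each block is binarized around its mean and reconstructed with two levels that preserve the block's first two sample moments. Below is a minimal, hypothetical sketch of that baseline applied to 1-D audio blocks; the block length, signal and the absence of the paper's modified quantizers are assumptions for illustration.

```python
import numpy as np

def btc_block(x):
    """Classic BTC on one block: keep mean/variance, 1 bit per sample."""
    n = len(x)
    m, s = x.mean(), x.std()
    mask = x >= m                              # 1-bit plane
    q = mask.sum()
    if q == 0 or q == n:                       # flat block: both levels equal the mean
        return np.full(n, m)
    a = m - s * np.sqrt(q / (n - q))           # level for samples below the mean
    b = m + s * np.sqrt((n - q) / q)           # level for samples at/above the mean
    return np.where(mask, b, a)

def btc_code(signal, block=32):
    """Encode/decode an audio signal block by block (no entropy coding)."""
    pad = (-len(signal)) % block
    x = np.pad(signal, (0, pad))
    out = np.concatenate([btc_block(x[i:i + block]) for i in range(0, len(x), block)])
    return out[:len(signal)]

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    audio = 0.5 * np.sin(2 * np.pi * 440 * t)
    rec = btc_code(audio)
    snr = 10 * np.log10(np.sum(audio**2) / np.sum((audio - rec)**2))
    print(f"reconstruction SNR: {snr:.1f} dB")
```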

  3. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The digital audio signal processor (DSP) TC9446F series has been developed for silicon audio playback devices that use memory media such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats adopted for DVDs and similar media, such as Dolby Digital, DTS (digital theater system) and MPEG2 audio. It can carry built-in audio signal processing programs, e.g., Dolby ProLogic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the line-up; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  4. Modified DCTNet for audio signals classification

    Science.gov (United States)

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-10-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of an adaptive DCTNet (A-DCTNet) for audio signal feature extraction. The A-DCTNet applies the idea of the constant-Q transform, with the center frequencies of its filterbank geometrically spaced. The A-DCTNet adapts to different acoustic scales, and it captures low-frequency acoustic information, to which human auditory perception is sensitive, better than features such as Mel-frequency spectral coefficients (MFSC). We use features extracted by the A-DCTNet as input for classifiers. Experimental results show that the A-DCTNet with Recurrent Neural Networks (RNN) achieves state-of-the-art bird song classification rates and improves artist identification accuracy on music data, demonstrating the A-DCTNet's applicability to signal processing problems.
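
    As a point of reference for the constant-Q idea the abstract mentions, here is a minimal sketch of geometrically spaced filterbank center frequencies; the frequency range, bins-per-octave value and variable names are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def constant_q_centers(f_min=55.0, f_max=8000.0, bins_per_octave=12):
    """Geometrically spaced center frequencies with a shared quality factor Q."""
    n_octaves = np.log2(f_max / f_min)
    n_bins = int(np.floor(n_octaves * bins_per_octave)) + 1
    k = np.arange(n_bins)
    centers = f_min * 2.0 ** (k / bins_per_octave)            # geometric spacing
    q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)          # constant Q = f_k / bandwidth_k
    bandwidths = centers / q
    return centers, bandwidths

if __name__ == "__main__":
    fc, bw = constant_q_centers()
    print(f"{len(fc)} bands, first: {fc[0]:.1f} Hz, last: {fc[-1]:.1f} Hz")
```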

  5. High-Order Sparse Linear Predictors for Audio Processing

    DEFF Research Database (Denmark)

    Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll

    2010-01-01

    Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far fetched, e.g., the strong ability of modeling the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different...

  6. Book review: An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics by Alexander Lerch

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A critical review of the book: An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics, by Alexander Lerch, October 2012, Wiley-IEEE Press. ISBN: 978-1-118-26682-3, Hardcover, 272 pages, 503 references. List price $125.00.

  7. 47 CFR 10.520 - Common audio attention signal.

    Science.gov (United States)

    2010-10-01

    Title 47 (Telecommunication), Equipment Requirements, § 10.520 Common audio attention signal: A Participating CMS Provider and equipment manufacturers may only market devices for public use under part 10 that include an audio attention signal that...

  8. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Umapathy Karthikeyan

    2007-01-01

    Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with SDG ranging from –0.53 to –2.27.

  9. Detection and Correction of Under-/Overexposed Optical Soundtracks by Coupling Image and Audio Signal Processing

    Directory of Open Access Journals (Sweden)

    Etienne Decenciere

    2008-10-01

    Film restoration using image processing has been an active research field in recent years. However, the restoration of the soundtrack has been mainly performed in the sound domain, using signal processing methods, despite the fact that it is recorded as a continuous image between the images of the film and the perforations. While the very few published approaches focus on removing dust particles or concealing larger corrupted areas, no published works are devoted to the restoration of soundtracks degraded by substantial underexposure or overexposure. Digital restoration of optical soundtracks is an unexploited application field and, besides, scientifically rich, because it allows mixing both image and signal processing approaches. After introducing the principles of optical soundtrack recording and playback, this contribution focuses on our first approaches to detect and cancel the effects of underexposure and overexposure. We intentionally choose to quantify the effect of bad exposure in the 1D audio signal domain instead of the 2D image domain. Our measurement is sent as a feedback value to an image processing stage where the correction takes place, building up a “digital image and audio signal” closed-loop processing chain. The approach is validated on both simulated alterations and real data.

  10. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Karthikeyan Umapathy

    2007-08-01

    Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF-specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal-adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score, MOS) listening tests were performed, and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique, with SDG ranging from –0.53 to –2.27.

  11. pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.

    Science.gov (United States)

    Giannakopoulos, Theodoros

    2015-01-01

    Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automation and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g., audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures, including feature extraction, classification of audio signals, supervised and unsupervised segmentation, and content visualization. pyAudioAnalysis is licensed under the Apache License and is available on GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation, and health applications (e.g., monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
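
    A minimal usage sketch of the short-term feature extraction mentioned above follows. The module and function names reflect recent releases of the library and may differ in older versions, and the input file path is an assumption.

```python
# Minimal feature-extraction sketch with pyAudioAnalysis; module/function names
# follow recent releases of the library and may differ in older versions.
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

fs, x = audioBasicIO.read_audio_file("example.wav")   # hypothetical input file
x = audioBasicIO.stereo_to_mono(x)

# 50 ms windows with a 25 ms hop: returns a (num_features x num_frames) matrix
# plus the list of feature names (ZCR, energy, MFCCs, chroma, ...).
features, feature_names = ShortTermFeatures.feature_extraction(
    x, fs, int(0.050 * fs), int(0.025 * fs))
print(features.shape, feature_names[:5])
```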

  12. Adaptive DCTNet for Audio Signal Classification

    OpenAIRE

    Xian, Yin; Pu, Yunchen; Gan, Zhe; Lu, Liang; Thompson, Andrew

    2016-01-01

    In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of adaptive DCTNet (A-DCTNet) for audio signals feature extraction. The A-DCTNet applies the idea of constant-Q transform, with its center frequencies of filterbanks geometrically spaced. The A-DCTNet is adaptive to different acoustic scales, and it can better capture low frequency acoustic information that is sensitive to h...

  13. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    John Mourjopoulos

    2007-01-01

    Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed “jither” and it can be applied either in the PCM amplitude domain or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio for mild oversampling (e.g., ×4), resulting in typical PWM clock rates of 90 MHz.

  14. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    Mourjopoulos John

    2007-01-01

    Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed "jither" and it can be applied either in the PCM amplitude domain or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio for mild oversampling (e.g., ×4), resulting in typical PWM clock rates of 90 MHz.

  15. Detecting double compression of audio signal

    Science.gov (United States)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to a high bitrate, since high-bitrate files have higher commercial value. Also, audio recordings made on digital recorders can easily be doctored with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3 files and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
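
    The feature described above, a first-digit histogram of quantized MDCT coefficients fed to an SVM, can be sketched as follows; the MDCT extraction itself is omitted, and the coefficient arrays, labels and SVM parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def first_digit_histogram(coeffs):
    """Normalized histogram of leading digits (1-9) of nonzero quantized coefficients."""
    c = np.abs(coeffs[coeffs != 0]).astype(float)
    first = (c / 10 ** np.floor(np.log10(c))).astype(int)   # leading digit 1..9
    hist = np.bincount(first, minlength=10)[1:10].astype(float)
    return hist / hist.sum()

# Hypothetical data: each row of `mdct_blocks` holds quantized MDCT coefficients
# for one MP3 file; `labels` marks single (0) vs. double (1) compression.
rng = np.random.default_rng(0)
mdct_blocks = [rng.integers(-50, 50, size=2048) for _ in range(40)]
labels = np.array([0, 1] * 20)

X = np.vstack([first_digit_histogram(b) for b in mdct_blocks])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```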

  16. Real-Time Audio Processing on the T-CREST Multicore Platform

    DEFF Research Database (Denmark)

    Ausin, Daniel Sanz; Pezzarossa, Luca; Schoeberl, Martin

    2017-01-01

    Multicore platforms are nowadays widely used for audio processing applications, due to the improvement of computational power that they provide. However, some of these systems are not optimized for temporally constrained environments, which often leads to an undesired increase in the latency of the audio signal. This paper presents a real-time multicore audio processing system based on the T-CREST platform. T-CREST is a time-predictable multicore processor for real-time embedded systems. Multiple audio effect tasks have been implemented, which can be connected together in different configurations forming sequential and parallel effect chains, and using a network-on-chip for intercommunication between processors. The evaluation of the system shows that real-time processing of multiple effect configurations is possible, and that the estimation and control of latency ensures real-time behavior.

  17. Fault Diagnosis using Audio and Vibration Signals in a Circulating Pump

    International Nuclear Information System (INIS)

    Henríquez, P; Alonso, J B; Ferrer, M A; Travieso, C M; Gómez, G

    2012-01-01

    This paper presents the use of audio and vibration signals for fault diagnosis of a circulating pump. The novelty of this paper is the use of audio signals acquired by microphones. The objective is to determine whether audio signals can distinguish between normal and different abnormal conditions in a circulating pump. In order to compare results, vibration signals are also acquired and analysed. Wavelet packet decomposition is used to obtain the energies in different frequency bands of the audio and vibration signals. Neural networks are used to evaluate the ability of the extracted features to discriminate between normal and fault conditions. The results show that information from the sound signals can distinguish between normal and different faulty conditions with success rates of 83.33%, 98% and 91.33% for the three microphones, respectively. These success rates are similar to, and even higher than, those obtained from the accelerometers (68%, 90.67% and 71.33%, respectively). The success rates also show that the positions of the microphones and accelerometers affect the final results.
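
    A minimal sketch of the band-energy feature described above, using the wavelet packet transform from PyWavelets; the wavelet family, decomposition level and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_packet_energies(signal, wavelet="db4", level=3):
    """Relative energy in each terminal wavelet-packet band (2**level bands)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # bands in frequency order
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()

if __name__ == "__main__":
    fs = 10_000
    t = np.arange(fs) / fs
    # Hypothetical pump recording: a tonal component plus broadband noise.
    x = np.sin(2 * np.pi * 300 * t) + 0.3 * np.random.randn(fs)
    print(np.round(wavelet_packet_energies(x), 3))
```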

  18. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  19. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed over the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
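
    The claim that a noiseless sum of K sinusoids is predicted almost perfectly by an order-2K all-pole model can be checked with a few lines of numpy; the autocorrelation-method solver and the test signal below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_coefficients(x, order):
    """Autocorrelation-method LP coefficients a[1..order] (Yule-Walker equations)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz(r[:order], r[1:order + 1])

fs, n = 8000, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1250 * t)  # K = 2 sinusoids

order = 4                                            # 2K poles
a = lp_coefficients(x, order)
pred = np.zeros_like(x)
for k in range(order, n):
    pred[k] = np.dot(a, x[k - order:k][::-1])        # a1*x[k-1] + ... + a4*x[k-4]
err = x[order:] - pred[order:]
print(f"mean squared prediction error: {np.mean(err**2):.2e}")  # small vs. signal power ~0.6
```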

  20. AUTOMATIC SEGMENTATION OF BROADCAST AUDIO SIGNALS USING AUTO ASSOCIATIVE NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2010-12-01

    In this paper, we describe automatic segmentation methods for broadcast audio data. Today, digital audio applications are part of our everyday lives, and since there are more and more digital audio databases in place, effective management of audio databases has become important. The broadcast audio data are recorded from television and comprise various categories of audio signals. Efficient algorithms for segmenting the broadcast audio data into predefined categories are proposed. Audio features, namely linear prediction coefficients (LPC), linear prediction cepstral coefficients, and Mel frequency cepstral coefficients (MFCC), are extracted to characterize the audio data. Auto-associative neural networks are used to segment the audio data into predefined categories using the extracted features. Experimental results indicate that the proposed algorithms can produce satisfactory results.

  1. Signal Processing

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Signal processing techniques, extensively used nowadays to maximize the performance of audio and video equipment, have been a key part of the design of hardware and software for high energy physics detectors since pioneering applications in the UA1 experiment at CERN in 1979.

  2. Digital signal processing

    CERN Document Server

    O'Shea, Peter; Hussain, Zahir M

    2011-01-01

    In three parts, this book contributes to the advancement of engineering education and serves as a general reference on digital signal processing. Part I presents the basics of analog and digital signals and systems in the time and frequency domain. It covers the core topics: convolution, transforms, filters, and random signal analysis. It also treats important applications including signal detection in noise, radar range estimation for airborne targets, binary communication systems, channel estimation, banking and financial applications, and audio effects production. Part II considers sel

  3. The newest digital signal processing

    International Nuclear Information System (INIS)

    Lee, Chae Uk

    2002-08-01

    This book deals with the newest digital signal processing. It covers: an introduction to the concepts, structure and purpose of digital signal processing; signals and systems, including continuous signals, discrete signals and discrete systems; input/output descriptions such as the impulse response, convolution, interconnection of systems and frequency characteristics; the z-transform, its definition, region of convergence, applications and relationship with the Laplace transform; the discrete Fourier transform, the IDFT algorithm and applications of the fast Fourier transform (FFT); the foundations of digital filters, including their notation, representations, types, frequency characteristics and design procedure; the design of FIR and IIR digital filters; adaptive signal processing; audio signal processing; video signal processing; and applications of digital signal processing.

  4. Augmenting Environmental Interaction in Audio Feedback Systems

    Directory of Open Access Journals (Sweden)

    Seunghun Kim

    2016-04-01

    Audio feedback is defined as a positive feedback of acoustic signals where an audio input and output form a loop, and it may be utilized artistically. This article presents new context-based controls over audio feedback, leading to the generation of desired sonic behaviors by enriching the influence of existing acoustic information such as room response and ambient noise. This ecological approach to audio feedback emphasizes mutual sonic interaction between signal processing and the acoustic environment. Mappings from analyses of the received signal to signal-processing parameters are designed to emphasize this specificity as an aesthetic goal. Our feedback system presents four types of mappings: approximate analyses of room reverberation to tempo-scale characteristics, ambient noise to amplitude, and two different approximations of resonances to timbre. These mappings are validated computationally and evaluated experimentally in different acoustic conditions.

  5. Digital signal processing methods and algorithms for audio conferencing systems

    OpenAIRE

    Lindström, Fredric

    2007-01-01

    Today, we are interconnected almost all over the planet. Large multinational companies operate worldwide, but an increasing number of small and medium sized companies also do business overseas. As people travel to meet and do business, the already exposed earth is subject to even more strain. Audio conferencing is an attractive alternative to travel, which is becoming more and more appreciated. Audio conferences can of course not replace all types of meetings, but can help companies to cut ...

  6. A Perceptually Reweighted Mixed-Norm Method for Sparse Approximation of Audio Signals

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll; Sturm, Bob L.

    2011-01-01

    In this paper, we consider the problem of finding sparse representations of audio signals for coding purposes. In doing so, it is of utmost importance that when only a subset of the present components of an audio signal are extracted, it is the perceptually most important ones. To this end, we propose a new iterative algorithm based on two principles: 1) a reweighted l1-norm based measure of sparsity; and 2) a reweighted l2-norm based measure of perceptual distortion. Using these measures, the considered problem is posed as a constrained convex optimization problem that can be solved optimally using standard software. A prominent feature of the new method is that it solves a problem that is closely related to the objective of coding, namely rate-distortion optimization. In computer simulations, we demonstrate the properties of the algorithm and its application to real audio signals.

  7. An audio FIR-DAC in a BCD process for high power Class-D amplifiers

    NARCIS (Netherlands)

    Doorn, T.S.; van Tuijl, Adrianus Johannes Maria; Schinkel, Daniel; Annema, Anne J.; Berkhout, M.; Berkhout, M.; Nauta, Bram

    A 322-coefficient semi-digital FIR-DAC using a 1-bit PWM input signal was designed and implemented in a high-voltage, audio power bipolar CMOS DMOS (BCD) process. This facilitates digital input signals for an analog class-D amplifier in BCD. The FIR-DAC performance depends on the ISI-resistant

  8. Interactive Teaching of Adaptive Signal Processing

    OpenAIRE

    Stewart, R W; Harteneck, M; Weiss, S

    2000-01-01

    Over the last 30 years adaptive digital signal processing has progressed from being a strictly graduate level advanced class in signal processing theory to a topic that is part of the core curriculum for many undergraduate signal processing classes. The key reason is the continued advance of communications technology, with its need for echo control and equalisation, and the widespread use of adaptive filters in audio, biomedical, and control applications. In this paper we will review the basi...

  9. Mixed-Signal Architectures for High-Efficiency and Low-Distortion Digital Audio Processing and Power Amplification

    Directory of Open Access Journals (Sweden)

    Pierangelo Terreni

    2010-01-01

    The paper addresses the algorithmic and architectural design of digital-input power audio amplifiers. A modelling platform, based on a meet-in-the-middle approach between top-down and bottom-up design strategies, allows a fast but still accurate exploration of the mixed-signal design space. Different amplifier architectures are configured and compared to find optimal trade-offs among different cost functions: low distortion, high efficiency, low circuit complexity and low sensitivity to parameter changes. A novel amplifier architecture is derived; its prototype implements digital processing IP macrocells (oversampler, interpolating filter, PWM cross-point deriver, noise shaper, multilevel PWM modulator, dead time compensator) on a single low-complexity FPGA, while off-chip components are used only for the power output stage (LC filter and power MOS bridge); no heatsink is required. The resulting digital-input amplifier features a power efficiency higher than 90% and a total harmonic distortion down to 0.13% at power levels of tens of watts. Discussions towards the full-silicon integration of the mixed-signal amplifier in embedded devices, using BCD technology and targeting power levels of a few watts, are also reported.

  10. Car audio using DSP for active sound control. DSP ni yoru active seigyo wo mochiita audio

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, K.; Asano, S.; Furukawa, N. (Mitsubishi Motor Corp., Tokyo (Japan))

    1993-06-01

    In the automobile cabin, problems unique to the environment, such as the narrow space and the background noise, spoil the quality of sound reproduction from audio equipment. Audio signal processing using a DSP (digital signal processor) enables a solution to these problems, and a car audio system with high amenity has been successfully realized through active sound control using a DSP. The DSP consists of an adder, coefficient multiplier, delay unit, and connections. The actual processing performed by the DSP includes functions such as sound field correction, response to and processing of noise during driving, surround reproduction, and graphic equalizer processing. The high effectiveness of the method was confirmed through an actual driving evaluation test. The present paper describes the actual method of sound control technology using the DSP; in particular, the dynamic processing of the noise during driving is discussed in detail. 1 ref., 12 figs., 1 tab.

  11. Parametric time-frequency domain spatial audio

    CERN Document Server

    Delikaris-Manias, Symeon; Politis, Archontis

    2018-01-01

    This book provides readers with the principles and best practices in spatial audio signal processing. It describes how sound fields and their perceptual attributes are captured and analyzed within the time-frequency domain, how essential representation parameters are coded, and how such signals are efficiently reproduced for practical applications. The book is split into four parts starting with an overview of the fundamentals. It then goes on to explain the reproduction of spatial sound before offering an examination of signal-dependent spatial filtering. The book finishes with coverage of both current and future applications and the direction that spatial audio research is heading in. Parametric Time-frequency Domain Spatial Audio focuses on applications in entertainment audio, including music, home cinema, and gaming--covering the capturing and reproduction of spatial sound as well as its generation, transduction, representation, transmission, and perception. This book will teach readers the tools needed...

  12. All About Audio Equalization: Solutions and Frontiers

    Directory of Open Access Journals (Sweden)

    Vesa Välimäki

    2016-05-01

    Audio equalization is a vast and active research area. The extent of research means that one often cannot identify the preferred technique for a particular problem. This review paper bridges those gaps, systematically providing a deep understanding of the problems and approaches in audio equalization, their relative merits and applications. Digital signal processing techniques for modifying the spectral balance in audio signals and applications of these techniques are reviewed, ranging from classic equalizers to emerging designs based on new advances in signal processing and machine learning. Emphasis is placed on putting the range of approaches within a common mathematical and conceptual framework. The application areas discussed herein are diverse, and include well-defined, solvable problems of filter design subject to constraints, as well as newly emerging challenges that touch on problems in semantics, perception and human computer interaction. Case studies are given in order to illustrate key concepts and how they are applied in practice. We also recommend preferred signal processing approaches for important audio equalization problems. Finally, we discuss current challenges and the uncharted frontiers in this field. The source code for methods discussed in this paper is made available at https://code.soundsoftware.ac.uk/projects/allaboutaudioeq.
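
    As a small, concrete instance of the classic equalizer designs such a review covers, here is a sketch of a single peaking-EQ biquad computed with the widely used RBJ audio-EQ-cookbook formulas; the center frequency, gain and Q chosen below are arbitrary examples, and this is not code from the paper's repository.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for a peaking equalizer (RBJ cookbook)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

if __name__ == "__main__":
    fs = 48000
    b, a = peaking_biquad(fs, f0=1000.0, gain_db=6.0, q=1.0)
    noise = np.random.randn(fs)          # one second of white noise
    boosted = lfilter(b, a, noise)       # +6 dB bell around 1 kHz
    print("coefficients:", np.round(b, 4), np.round(a, 4))
```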

  13. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. The book offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; and discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks...

  14. DAFX Digital Audio Effects

    CERN Document Server

    2011-01-01

    The rapid development in various fields of Digital Audio Effects, or DAFX, has led to new algorithms, and this second edition of the popular book DAFX: Digital Audio Effects has been updated throughout to reflect progress in the field. It maintains a unique approach to DAFX with a lecture-style introduction to the basics of effect processing. Each effect description begins with the presentation of the physical and acoustical phenomena and an explanation of the signal processing techniques used to achieve the effect, followed by a discussion of musical applications and the control of effect parameters.

  15. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.

  16. Subband coding of digital audio signals without loss of quality

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Breeuwer, Marcel; van de Waal, Robbert

    1989-01-01

    A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been

  17. Improved Techniques for Automatic Chord Recognition from Music Audio Signals

    Science.gov (United States)

    Cho, Taemin

    2014-01-01

    This thesis is concerned with the development of techniques that facilitate the effective implementation of capable automatic chord transcription from music audio signals. Since chord transcriptions can capture many important aspects of music, they are useful for a wide variety of music applications and also useful for people who learn and perform…

  18. Content Discovery from Composite Audio : An unsupervised approach

    NARCIS (Netherlands)

    Lu, L.

    2009-01-01

    In this thesis, we developed and assessed a novel robust and unsupervised framework for semantic inference from composite audio signals. We focused on the problem of detecting audio scenes and grouping them into meaningful clusters. Our approach addressed all major steps in a general process of

  19. Recursive nearest neighbor search in a sparse and multiscale domain for comparing audio signals

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Daudet, Laurent

    2011-01-01

    We investigate recursive nearest neighbor search in a sparse domain at the scale of audio signals. Essentially, to approximate the cosine distance between the signals we make pairwise comparisons between the elements of localized sparse models built from large and redundant multiscale dictionaries...

  20. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is comprehensive and well-established open source software that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
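
    A minimal sketch of the kind of scripted batch processing the article describes, driving the ffmpeg command line from Python; the directory layout, target format and bitrate are assumptions for illustration.

```python
import subprocess
from pathlib import Path

SRC = Path("masters")        # hypothetical folder of archival WAV masters
DST = Path("access_copies")  # hypothetical output folder for access copies
DST.mkdir(exist_ok=True)

for wav in sorted(SRC.glob("*.wav")):
    mp3 = DST / (wav.stem + ".mp3")
    # Transcode to a 192 kbit/s MP3 access copy, downmixed to stereo at 44.1 kHz.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(wav),
         "-codec:a", "libmp3lame", "-b:a", "192k", "-ar", "44100", "-ac", "2",
         str(mp3)],
        check=True)
    print("created", mp3)
```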

  1. Precision Scaling of Neural Networks for Efficient Audio Processing

    OpenAIRE

    Ko, Jong Hwan; Fromm, Josh; Philipose, Matthai; Tashev, Ivan; Zarar, Shuayb

    2017-01-01

    While deep neural networks have shown powerful performance in many audio applications, their large computation and memory demand has been a challenge for real-time processing. In this paper, we study the impact of scaling the precision of neural networks on the performance of two common audio processing tasks, namely, voice-activity detection and single-channel speech enhancement. We determine the optimal pair of weight/neuron bit precision by exploring its impact on both the performance and ...

  2. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  3. Modeling Audio Fingerprints : Structure, Distortion, Capacity

    NARCIS (Netherlands)

    Doets, P.J.O.

    2010-01-01

    An audio fingerprint is a compact low-level representation of a multimedia signal. An audio fingerprint can be used to identify audio files or fragments in a reliable way. The use of audio fingerprints for identification consists of two phases. In the enrollment phase known content is fingerprinted,

  4. Ambiguity Function Analysis and Processing for Passive Radar Based on CDR Digital Audio Broadcasting

    Directory of Open Access Journals (Sweden)

    Zhang Qiang

    2015-01-01

    China Digital Radio (CDR) broadcasting is a new standard for digital audio broadcasting in the FM band (87–108 MHz) based on our research and development efforts. It is compatible with the spectrum of analog FM radio and satisfies the requirements for a smooth transition from analog to digital FM broadcasting in China. This paper focuses on the signal characteristics and processing methods of radio-based passive radar. The signal characteristics and ambiguity function of the passive radar illumination source are analyzed. The adverse effects on target detection of the side peaks caused by the cyclic prefix, the Doppler ambiguity strips caused by signal synchronization, and the range side peaks resulting from the discontinuous signal spectrum are then studied. Finally, methods for suppressing these side peaks are proposed and their effectiveness is verified by simulations.

  5. Modeling binaural signal detection

    NARCIS (Netherlands)

    Breebaart, D.J.

    2001-01-01

    With the advent of multimedia technology and powerful signal processing systems, audio processing and reproduction has gained renewed interest. Examples of products that have been developed are audio coding algorithms to efficiently store and transmit music and speech, or audio reproduction systems

  6. Advances in audio source separation and multisource audio content retrieval

    Science.gov (United States)

    Vincent, Emmanuel

    2012-06-01

    Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.

  7. Design and Implementation of a Video-Zoom Driven Digital Audio-Zoom System for Portable Digital Imaging Devices

    Science.gov (United States)

    Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won

    In this paper, we propose a video-zoom driven audio-zoom algorithm in order to provide audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. A real-time audio-zoom system is then implemented on an ARM Cortex-A8 with a clock speed of 600 MHz, after several levels of optimization such as algorithmic, C-code, and memory optimizations. To evaluate the complexity of the proposed real-time audio-zoom system, 21.3 seconds of test data sampled at 48 kHz are used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
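
    The final step described above, scaling a masked signal by a gain tied to the zoom level, can be sketched as follows; the mapping from zoom level to gain in dB and the variable names are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np

def audio_zoom(masked_signal, zoom_level, max_gain_db=10.0):
    """Scale the (already soft-masked) front-direction signal by a zoom-dependent gain.

    zoom_level is assumed to be normalized to [0, 1] (0 = wide angle, 1 = full zoom);
    the ~10 dB maximum boost mirrors the front-direction gain reported above.
    """
    gain_db = max_gain_db * np.clip(zoom_level, 0.0, 1.0)
    return masked_signal * 10.0 ** (gain_db / 20.0)

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs) / fs
    masked = 0.1 * np.sin(2 * np.pi * 500 * t)       # stand-in for the masked signal
    zoomed = audio_zoom(masked, zoom_level=0.75)
    ratio = np.max(np.abs(zoomed)) / np.max(np.abs(masked))
    print(f"applied gain: {20 * np.log10(ratio):.1f} dB")
```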

  8. Tools for signal compression applications to speech and audio coding

    CERN Document Server

    Moreau, Nicolas

    2013-01-01

    This book presents tools and algorithms required to compress/uncompress signals such as speech and music. These algorithms are largely used in mobile phones, DVD players, HDTV sets, etc. In a first rather theoretical part, this book presents the standard tools used in compression systems: scalar and vector quantization, predictive quantization, transform quantization, entropy coding. In particular we show the consistency between these different tools. The second part explains how these tools are used in the latest speech and audio coders. The third part gives Matlab programs simulating t

  9. Design and Implementation of a linear-phase equalizer in digital audio signal processing

    NARCIS (Netherlands)

    Slump, Cornelis H.; van Asma, C.G.M.; Barels, J.K.P.; Barels, J.K.P.; Brunink, W.J.A; Drenth, F.B.; Pol, J.V.; Schouten, D.S.; Samsom, M.M.; Samsom, M.M.; Herrmann, O.E.

    1992-01-01

    This contribution presents the four phases of a project aiming at the realization in VLSI of a digital audio equalizer with a linear phase characteristic. The first step includes the identification of the system requirements, based on experience and (psycho-acoustical) literature. Secondly, the

  10. Audio segmentation using Flattened Local Trimmed Range for ecological acoustic space analysis

    Directory of Open Access Journals (Sweden)

    Giovany Vega

    2016-06-01

    The acoustic space in a given environment is filled with footprints arising from three processes: biophony, geophony and anthrophony. Bioacoustic research using passive acoustic sensors can result in thousands of recordings. An important component of processing these recordings is to automate signal detection. In this paper, we describe a new spectrogram-based approach for extracting individual audio events. Spectrogram-based audio event detection (AED) relies on separating the spectrogram into background (i.e., noise) and foreground (i.e., signal) classes using a threshold such as a global threshold, a per-band threshold, or one given by a classifier. These methods are either too sensitive to noise, designed for an individual species, or require prior training data. Our goal is to develop an algorithm that is not sensitive to noise, does not need any prior training data and works with any type of audio event. To do this, we propose: (1) a spectrogram filtering method, the Flattened Local Trimmed Range (FLTR) method, which models the spectrogram as a mixture of stationary and non-stationary energy processes and mitigates the effect of the stationary processes, and (2) an unsupervised algorithm that uses the filter to detect audio events. We measured the performance of the algorithm using a set of six thoroughly validated audio recordings and obtained a sensitivity of 94% and a positive predictive value of 89%. These sensitivity and positive predictive values are very high, given that the validated recordings are diverse and obtained from field conditions. The algorithm was then used to extract audio events in three datasets. Features of these audio events were plotted and showed the unique aspects of the three acoustic communities.
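
    The FLTR filter itself is not specified in this abstract, but the general spectrogram-thresholding idea it builds on can be sketched with a simple per-band threshold; the synthetic recording, STFT parameters and median-plus-offset rule below are assumptions for illustration, not the FLTR method.

```python
import numpy as np
from scipy.signal import spectrogram

def per_band_event_mask(x, fs, offset_db=6.0):
    """Flag time-frequency cells whose energy exceeds a per-band noise floor."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
    sxx_db = 10 * np.log10(sxx + 1e-12)
    noise_floor = np.median(sxx_db, axis=1, keepdims=True)   # per-band background
    return f, t, sxx_db > (noise_floor + offset_db)          # foreground mask

if __name__ == "__main__":
    fs = 22050
    t = np.arange(5 * fs) / fs
    # Hypothetical field recording: background noise plus a brief 2 kHz call.
    x = 0.05 * np.random.randn(len(t))
    x[fs:fs + fs // 4] += 0.5 * np.sin(2 * np.pi * 2000 * t[:fs // 4])
    _, _, mask = per_band_event_mask(x, fs)
    print(f"foreground cells: {mask.mean() * 100:.1f}%")
```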

  11. Extracting meaning from audio signals - a machine learning approach

    DEFF Research Database (Denmark)

    Larsen, Jan

    2007-01-01

    * Machine learning framework for sound search * Genre classification * Music and audio separation * Wind noise suppression

  12. Virtual Microphones for Multichannel Audio Resynthesis

    Directory of Open Access Journals (Sweden)

    Athanasios Mouchtaris

    2003-09-01

    Multichannel audio offers significant advantages for music reproduction, including the ability to provide better localization and envelopment, as well as reduced imaging distortion. On the other hand, multichannel audio is a demanding media type in terms of transmission requirements. Often, bandwidth limitations prohibit transmission of multiple audio channels. In such cases, an alternative is to transmit only one or two reference channels and recreate the rest of the channels at the receiving end. Here, we propose a system capable of synthesizing the required signals from a smaller set of signals recorded in a particular venue. These synthesized “virtual” microphone signals can be used to produce multichannel recordings that accurately capture the acoustics of that venue. Applications of the proposed system include transmission of multichannel audio over the current Internet infrastructure and, as an extension of the methods proposed here, remastering existing monophonic and stereophonic recordings for multichannel rendering.

  13. Audio Frequency Analysis in Mobile Phones

    Science.gov (United States)

    Aguilar, Horacio Munguía

    2016-01-01

    A new experiment using mobile phones is proposed in which the phone's audio frequency response is analyzed by feeding an external signal into the audio port and measuring the resulting output. This experiment shows how the limited audio bandwidth used in mobile telephony is the main cause of the poor speech quality of this service. A brief discussion is…

  14. Distortion Estimation in Compressed Music Using Only Audio Fingerprints

    NARCIS (Netherlands)

    Doets, P.J.O.; Lagendijk, R.L.

    2008-01-01

    An audio fingerprint is a compact yet very robust representation of the perceptually relevant parts of an audio signal. It can be used for content-based audio identification, even when the audio is severely distorted. Audio compression changes the fingerprint slightly. We show that these small

  15. A 240W Monolithic Class-D Audio Amplifier Output Stage

    OpenAIRE

    Nyboe, Flemming; Kaya, Cetin; Risbo, Lars; Andreani, Pietro

    2006-01-01

    A single-channel class-D audio amplifier output stage delivers 240 W unclipped into 4 Ω. An open-loop THD+N of 0.1% allows using the device in a fully digital audio signal path with no feedback. The output current capability is ±18 A, and the part is fabricated in a 0.4 µm/1.8 µm high-voltage BiCMOS process. Over-current sensing protects the output from short circuits.

  16. Small signal audio design

    CERN Document Server

    Self, Douglas

    2014-01-01

    Learn to use inexpensive and readily available parts to obtain state-of-the-art performance in all the vital parameters of noise, distortion, crosstalk and so on. With ample coverage of preamplifiers and mixers and a new chapter on headphone amplifiers, this practical handbook provides an extensive repertoire of circuits that can be put together to make almost any type of audio system.A resource packed full of valuable information, with virtually every page revealing nuggets of specialized knowledge not found elsewhere. Essential points of theory that bear on practical performance are lucidly

  17. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  18. Audible Aliasing Distortion in Digital Audio Synthesis

    Directory of Open Access Journals (Sweden)

    J. Schimmel

    2012-04-01

    This paper deals with aliasing distortion in the digital synthesis of classic periodic waveforms with infinite Fourier series for electronic musical instruments. When these waveforms are generated in the digital domain, aliasing appears due to their unlimited bandwidth. Several techniques for synthesizing these signals have been designed to avoid or reduce the aliasing distortion; however, these techniques have high computing demands. One can say that today's computers have enough computing power to use these methods. However, we have to realize that today's computer-aided music production requires tens of multi-timbre voices generated simultaneously by software synthesizers, and most of the computing power must be reserved for the hard-disc recording subsystem and for real-time processing of many audio channels with a lot of audio effects. Trivially generated classic analog synthesizer waveforms are therefore still effective for sound synthesis. We cannot avoid the aliasing distortion, but the spectral components produced by aliasing can be masked by harmonic components, and thus made inaudible, if a sufficient oversampling ratio is used. This paper deals with the assessment of audible aliasing distortion with the help of a psychoacoustic model of simultaneous masking and compares the computing demands of trivial generation with oversampling to those of other methods.
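
    A minimal sketch of the trivial-generation-plus-oversampling idea discussed above: a naive sawtooth is generated at an oversampled rate, then low-pass filtered and decimated back to the audio rate, which pushes much of the aliased energy out of band. The oversampling factor, fundamental frequency and decimation filter here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import decimate

def naive_sawtooth(f0, fs, seconds):
    """Trivially generated (band-unlimited) sawtooth in [-1, 1]."""
    t = np.arange(int(seconds * fs)) / fs
    return 2.0 * (f0 * t - np.floor(0.5 + f0 * t))

fs = 48000
oversample = 4                                   # e.g., x4 oversampling
saw_plain = naive_sawtooth(1244.5, fs, 1.0)      # aliased when generated directly at fs
saw_os = naive_sawtooth(1244.5, fs * oversample, 1.0)
saw_filtered = decimate(saw_os, oversample)      # low-pass filter + downsample back to fs

print(len(saw_plain), len(saw_filtered))         # both one second at 48 kHz
```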

  19. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    Science.gov (United States)

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate a single model while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be
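
    A minimal sketch of the calibration step described above: fit a power-law relation flow ≈ a·envelope^b from a single recording by linear regression in log-log space, then apply it to new inhalation audio. The Hilbert-magnitude envelope, the smoothing window and the synthetic data are assumptions of this sketch, not the exact processing used in the study.

      import numpy as np
      from scipy.signal import hilbert

      def acoustic_envelope(audio, smooth=200):
          """Smoothed amplitude envelope of an inhalation audio signal."""
          env = np.abs(hilbert(audio))
          return np.convolve(env, np.ones(smooth) / smooth, mode='same')

      def fit_power_law(envelope, flow, eps=1e-12):
          """Least-squares fit of flow ~= a * envelope**b in log-log space."""
          b, log_a = np.polyfit(np.log(envelope + eps), np.log(flow + eps), 1)
          return np.exp(log_a), b

      # Synthetic stand-ins for the pneumotachograph flow and the simultaneous audio.
      rng = np.random.default_rng(0)
      flow = np.abs(np.sin(np.linspace(0, np.pi, 8000)))    # calibration flow signal
      audio = flow * rng.standard_normal(8000)              # noise whose level follows flow
      a, b = fit_power_law(acoustic_envelope(audio), flow)
      flow_estimate = a * acoustic_envelope(audio) ** b     # apply the model to the envelope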

  20. Portable audio electronics for impedance-based measurements in microfluidics

    International Nuclear Information System (INIS)

    Wood, Paul; Sinton, David

    2010-01-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1–50 mM), flow rate (2–120 µL min⁻¹) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ∼10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems. (technical note)

  1. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
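
    A minimal sketch of the residual-motion metric used above: the standard deviation of the respiratory displacement samples that fall inside a displacement-based gating window chosen for a given duty cycle. The exhale-anchored window selection and the synthetic breathing trace are assumptions of this sketch.

      import numpy as np

      def residual_motion(displacement, duty_cycle):
          """Std of displacement samples inside an exhale-anchored gating window."""
          low = displacement.min()
          high = np.quantile(displacement, duty_cycle)   # pass ~duty_cycle of samples
          gated = displacement[(displacement >= low) & (displacement <= high)]
          return gated.std()

      # Synthetic breathing trace: 4 s period with slow drift and measurement noise.
      t = np.linspace(0, 120, 120 * 25)
      breathing = np.cos(2 * np.pi * t / 4.0) + 0.05 * t / 120 + 0.02 * np.random.randn(t.size)
      for dc in (0.3, 0.5, 0.7):
          print(dc, residual_motion(breathing, dc))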

  2. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  3. Instrumental Landing Using Audio Indication

    Science.gov (United States)

    Burlak, E. A.; Nabatchikov, A. M.; Korsun, O. N.

    2018-02-01

    The paper proposes an audio indication method for presenting to a pilot information about the relative position of an aircraft in precision piloting tasks. The implementation of the method is presented, and the use of audio signal parameters such as loudness, frequency and modulation is discussed. To confirm the operability of the audio indication channel, experiments were carried out using a modern aircraft simulation facility. Instrument landings were simulated using the proposed audio method to indicate the aircraft deviations in relation to the glide path. The results proved compatible with simulated instrument landings using the traditional glideslope pointers. This motivates further development of the method to address other precision piloting tasks.

  4. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.

  5. Digital Augmented Reality Audio Headset

    Directory of Open Access Journals (Sweden)

    Jussi Rämö

    2012-01-01

    Full Text Available Augmented reality audio (ARA) combines virtual sound sources with the real sonic environment of the user. An ARA system can be realized with a headset containing binaural microphones. Ideally, the ARA headset should be acoustically transparent, that is, it should not cause audible modification to the surrounding sound. A practical implementation of an ARA mixer requires a low-latency headphone reproduction system with additional equalization to compensate for the attenuation and the modified ear canal resonances caused by the headphones. This paper proposes digital IIR filters to realize the required equalization and evaluates a real-time prototype ARA system. Measurements show that the throughput latency of the digital prototype ARA system can be less than 1.4 ms, which is sufficiently small in practice. When the direct and processed sounds are combined in the ear, a comb filtering effect is brought about and appears as notches in the frequency response. The comb filter effect in speech and music signals was studied in a listening test and it was found to be inaudible when the attenuation is 20 dB. Insert ARA headphones have a sufficient attenuation at frequencies above about 1 kHz. The proposed digital ARA system enables several immersive audio applications, such as a virtual audio tourist guide and audio teleconferencing.

  6. Mobile video-to-audio transducer and motion detection for sensory substitution

    Directory of Open Access Journals (Sweden)

    Maxime eAmbard

    2015-10-01

    Full Text Available Visuo-auditory sensory substitution systems are augmented reality devices that translate a video stream into an audio stream in order to help the blind in daily tasks requiring visuo-spatial information. In this work, we present both a new mobile device and a transcoding method specifically designed to sonify moving objects. Frame differencing is used to extract spatial features from the video stream and two-dimensional spatial information is converted into audio cues using pitch, interaural time difference and interaural level difference. Using numerical methods, we attempt to reconstruct visuo-spatial information based on audio signals generated from various video stimuli. We show that despite a contrasted visual background and a highly lossy encoding method, the information in the audio signal is sufficient to allow object localization, object trajectory evaluation, object approach detection, and spatial separation of multiple objects. We also show that this type of audio signal can be interpreted by human users by asking ten subjects to discriminate trajectories based on generated audio signals.
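
    A minimal sketch of the transcoding idea described above: frame differencing locates moving pixels, the vertical position of the motion centroid sets the pitch, and the horizontal position sets the interaural level difference of a short stereo tone. Frequency range, tone duration and the difference threshold are illustrative assumptions.

      import numpy as np

      def sonify_motion(prev_frame, frame, fs=44100, dur=0.05,
                        f_low=300.0, f_high=3000.0, thresh=15):
          """Stereo tone encoding the centroid of inter-frame motion."""
          diff = np.abs(frame.astype(float) - prev_frame.astype(float))
          ys, xs = np.nonzero(diff > thresh)
          t = np.arange(int(fs * dur)) / fs
          if xs.size == 0:                                # no motion -> silence
              return np.zeros((t.size, 2))
          h, w = frame.shape
          y_rel = 1.0 - ys.mean() / h                     # 1 = top of the image
          x_rel = xs.mean() / w                           # 0 = left, 1 = right
          freq = f_low * (f_high / f_low) ** y_rel        # log-spaced pitch mapping
          tone = np.sin(2 * np.pi * freq * t)
          return np.stack([tone * np.sqrt(1 - x_rel),     # interaural level difference
                           tone * np.sqrt(x_rel)], axis=1)

      # Two synthetic frames with a bright square moving to the right.
      f0 = np.zeros((120, 160), dtype=np.uint8); f0[40:60, 30:50] = 255
      f1 = np.zeros_like(f0);                    f1[40:60, 50:70] = 255
      stereo = sonify_motion(f0, f1)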

  7. WLAN Technologies for Audio Delivery

    Directory of Open Access Journals (Sweden)

    Nicolas-Alexander Tatlas

    2007-01-01

    Full Text Available Audio delivery and reproduction for home or professional applications may greatly benefit from the adoption of digital wireless local area network (WLAN) technologies. The most challenging aspect of such integration relates to the synchronized and robust real-time streaming of multiple audio channels to multipoint receivers, for example, wireless active speakers. Here, it is shown that current WLAN solutions are susceptible to transmission errors. A detailed study of the IEEE802.11e protocol (currently under ratification) is also presented and all relevant distortions are assessed via an analytical and experimental methodology. A novel synchronization scheme is also introduced, allowing optimized playback for multiple receivers. The perceptual audio performance is assessed for both stereo and 5-channel applications based on either PCM or compressed audio signals.

  8. TECHNICAL NOTE: Portable audio electronics for impedance-based measurements in microfluidics

    Science.gov (United States)

    Wood, Paul; Sinton, David

    2010-08-01

    We demonstrate the use of audio electronics-based signals to perform on-chip electrochemical measurements. Cell phones and portable music players are examples of consumer electronics that are easily operated and are ubiquitous worldwide. Audio output (play) and input (record) signals are voltage based and contain frequency and amplitude information. A cell phone, laptop soundcard and two compact audio players are compared with respect to frequency response; the laptop soundcard provides the most uniform frequency response, while the cell phone performance is found to be insufficient. The audio signals in the common portable music players and laptop soundcard operate in the range of 20 Hz to 20 kHz and are found to be applicable, as voltage input and output signals, to impedance-based electrochemical measurements in microfluidic systems. Validated impedance-based measurements of concentration (0.1-50 mM), flow rate (2-120 µL min⁻¹) and particle detection (32 µm diameter) are demonstrated. The prevailing, lossless, wave audio file format is found to be suitable for data transmission to and from external sources, such as a centralized lab, and the cost of all hardware (in addition to audio devices) is ~10 USD. The utility demonstrated here, in combination with the ubiquitous nature of portable audio electronics, presents new opportunities for impedance-based measurements in portable microfluidic systems.
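
    A minimal sketch of the measurement principle: play a sine within the 20 Hz-20 kHz audio band and measure the amplitude of the simultaneously recorded response at that frequency, which varies with the impedance of the device placed between output and input. The `sounddevice` package, the single-bin DFT estimate and the assumption of a full-duplex soundcard are choices of this sketch; the external interface circuitry described in the note is not modeled.

      import numpy as np
      import sounddevice as sd

      def response_amplitude(freq=1000.0, fs=48000, dur=1.0):
          """Play a sine and return the recorded amplitude at the excitation frequency."""
          t = np.arange(int(fs * dur)) / fs
          excitation = 0.5 * np.sin(2 * np.pi * freq * t).astype(np.float32)
          recorded = sd.playrec(excitation, samplerate=fs, channels=1)
          sd.wait()                                   # block until playback/recording end
          window = np.hanning(recorded.shape[0])
          x = recorded[:, 0] * window
          n = np.arange(x.size)
          bin_value = np.sum(x * np.exp(-2j * np.pi * freq * n / fs))   # single-bin DFT
          return 2.0 * np.abs(bin_value) / window.sum()

      print(response_amplitude())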

  9. Acoustic signal recovery by thermal demodulation

    Science.gov (United States)

    Boullosa, R. R.; Santillán, Arturo O.

    2006-10-01

    One operating mode of recently developed thermoacoustic transducers is as an audio speaker that uses an input superimposed on a direct current; as a result, the audio signal occurs at the same frequency as the input signal. To extend the potential applications of these kinds of sources, the authors propose an alternative driving mode in which a simple thermoacoustic device, consisting of a metal film over a substrate and a heat sink, is excited with a high frequency sinusoid that is amplitude modulated by a lower frequency signal. They show that the modulating signal is recovered in the radiated waves due to a mechanism that is inherent to this type of thermoacoustic process. If the frequency of the carrier is higher than 30 kHz and any modulating signal (the one of interest) is in the audio frequency range, only this signal will be heard. Thus, the thermoacoustic device operates as an audio-band, self-demodulating speaker.

  10. A heterogeneous multiprocessor architecture for low-power audio signal processing applications

    DEFF Research Database (Denmark)

    Paker, Ozgun; Sparsø, Jens; Haandbæk, Niels

    2001-01-01

    . The processors are tailored for different classes of filtering algorithms (FIR, IIR, N-LMS etc.), and in a typical system the communication among processors occurs at the sampling rate only. The processors are parameterized in word-size, memory-size, etc. and can be instantiated according to the needs...... of the application at hand using a normal synthesis based ASIC design flow. To give an impression of the size of a processor we mention that one of the FIR processors in a prototype design has 16 instructions, a 32 word×16 bit program memory, a 64 word×16 bit data memory and a 25 word×16 bit coefficient memory....... Early results obtained from the design of a prototype chip containing filter processors for a hearing aid application, indicate a power consumption that is an order of magnitude better than current state of the art low-power audio DSPs implemented using full-custom techniques. This is due to: (1...

  11. Efficiency in audio processing : filter banks and transcoding

    NARCIS (Netherlands)

    Lee, Jun Wei

    2007-01-01

    Audio transcoding is the conversion of digital audio from one compressed form A to another compressed form B, where A and B have different compression properties, such as a different bit-rate, sampling frequency or compression method. This is typically achieved by decoding A to an intermediate

  12. Introduction of audio gating to further reduce organ motion in breathing synchronized radiotherapy

    International Nuclear Information System (INIS)

    Kubo, H. Dale; Wang Lili

    2002-01-01

    With breathing synchronized radiotherapy (BSRT), a voltage signal derived from an organ displacement detector is usually displayed on the vertical axis whereas the elapsed time is shown on the horizontal axis. The voltage gate window is set on the breathing voltage signal. Whenever the breathing signal falls between the two gate levels, a gate pulse is produced to enable the treatment machine. In this paper a new gating mechanism, audio (or time-sequence) gating, is introduced and is integrated into the existing voltage gating system. The audio gating takes advantage of the repetitive nature of the breathing signal when repetitive audio instruction is given to the patient. The audio gating is aimed at removing the regions of sharp rises and falls in the breathing signal that cannot be removed by the voltage gating. When the breathing signal falls between voltage gate levels as well as between audio-gate levels, the voltage- and audio-gated radiotherapy (ART) system will generate an AND gate pulse. When this gate pulse is received by a linear accelerator, the linear accelerator becomes 'enabled' for beam delivery and will deliver the beam when all other interlocks are removed. This paper describes a new gating mechanism and a method of recording the beam-on signal, both of which are configured into a laptop computer. The paper also presents evidence of some clinical advantages achieved with the ART system

  13. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Full Text Available Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  14. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to have recently been used for biometric and multimedia information retrieval systems. This technology is attained from successive research on audio feature extraction analysis. Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF result values for each frame of sampled voice signals obtained from a certain number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
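
    A minimal sketch of the proposed feature: for each frame of the sampled voice signal, an empirical probability distribution of sample amplitudes (a normalized histogram) is used directly as the feature vector. Frame length, hop size and bin count are assumptions of this sketch.

      import numpy as np

      def pdf_features(signal, frame_len=1024, hop=512, n_bins=32):
          """Per-frame normalized amplitude histograms, shape (n_frames, n_bins)."""
          frames = []
          for start in range(0, len(signal) - frame_len + 1, hop):
              frame = signal[start:start + frame_len]
              hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
              frames.append(hist)
          return np.array(frames)

      # Usage on a synthetic, roughly speech-shaped signal.
      rng = np.random.default_rng(1)
      x = np.tanh(0.3 * rng.standard_normal(16000))
      print(pdf_features(x).shape)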

  15. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  16. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  17. Audio-Visual Processing in Meetings: Seven Questions and Current AMI Answers

    NARCIS (Netherlands)

    Al-Hamas, Marc; Hain, Thomas; Cernocky, Jan; Schreiber, Sascha; Poel, Mannes; Rienks, R.J.

    2007-01-01

    The project Augmented Multi-party Interaction (AMI) is concerned with the development of meeting browsers and remote meeting assistants for instrumented meeting rooms, and with the required component technologies. R&D themes include group dynamics, audio, visual, and multimodal processing, content abstraction,

  18. Digital Signal Processing for In-Vehicle Systems and Safety

    CERN Document Server

    Boyraz, Pinar; Takeda, Kazuya; Abut, Hüseyin

    2012-01-01

    Compiled from papers of the 4th Biennial Workshop on DSP (Digital Signal Processing) for In-Vehicle Systems and Safety this edited collection features world-class experts from diverse fields focusing on integrating smart in-vehicle systems with human factors to enhance safety in automobiles. Digital Signal Processing for In-Vehicle Systems and Safety presents new approaches on how to reduce driver inattention and prevent road accidents. The material addresses DSP technologies in adaptive automobiles, in-vehicle dialogue systems, human machine interfaces, video and audio processing, and in-vehicle speech systems. The volume also features: Recent advances in Smart-Car technology – vehicles that take into account and conform to the driver Driver-vehicle interfaces that take into account the driving task and cognitive load of the driver Best practices for In-Vehicle Corpus Development and distribution Information on multi-sensor analysis and fusion techniques for robust driver monitoring and driver recognition ...

  19. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    Science.gov (United States)

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio-pulse embedded with two domains of information, a low-frequency binary-counting signal and a high, randomly changing frequency. This enabled the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be the ideal situation in some cases, the method presented in the current paper offers a viable, low-cost alternative, and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.
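
    A minimal sketch of the described audio pulse: per video frame, low-frequency tones encode the bits of a frame counter (the binary-counting domain) and one tone at a randomly drawn high frequency supplies the entropy used for algorithmic alignment. Frame rate, tone frequencies and amplitudes are illustrative assumptions.

      import numpy as np

      def sync_pulse(frame_index, fs=44100, frame_rate=30, n_bits=8,
                     hi_band=(8000.0, 15000.0), rng=None):
          """One video frame worth of synchronization audio."""
          if rng is None:
              rng = np.random.default_rng()
          bit_freqs = 200.0 + 100.0 * np.arange(n_bits)       # 200..900 Hz counting tones
          n = int(fs / frame_rate)
          t = np.arange(n) / fs
          pulse = np.zeros(n)
          for b, f in enumerate(bit_freqs):                   # low-frequency binary counter
              if (frame_index >> b) & 1:
                  pulse += 0.1 * np.sin(2 * np.pi * f * t)
          f_hi = rng.uniform(*hi_band)                        # randomly changing high tone
          pulse += 0.1 * np.sin(2 * np.pi * f_hi * t)
          return pulse

      # One minute of synchronization audio for a 30 fps camera.
      audio = np.concatenate([sync_pulse(i) for i in range(30 * 60)])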

  20. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to explore applications in both academia and industry, and it holds great promise for biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments in which a 5-hour audio recording of an OSAHS patient was processed by a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  1. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Abdeldjalil Aïssa-El-Bey

    2007-03-01

    Full Text Available This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source using vector clustering). For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.

  2. Underdetermined Blind Audio Source Separation Using Modal Decomposition

    Directory of Open Access Journals (Sweden)

    Aïssa-El-Bey Abdeldjalil

    2007-01-01

    Full Text Available This paper introduces new algorithms for the blind separation of audio sources using modal decomposition. Indeed, audio signals and, in particular, musical signals can be well approximated by a sum of damped sinusoidal (modal) components. Based on this representation, we propose a two-step approach consisting of a signal analysis (extraction of the modal components) followed by a signal synthesis (grouping of the components belonging to the same source using vector clustering). For the signal analysis, two existing algorithms are considered and compared: namely the EMD (empirical mode decomposition) algorithm and a parametric estimation algorithm using ESPRIT technique. A major advantage of the proposed method resides in its validity for both instantaneous and convolutive mixtures and its ability to separate more sources than sensors. Simulation results are given to compare and assess the performance of the proposed algorithms.

  3. Noise-Canceling Helmet Audio System

    Science.gov (United States)

    Seibert, Marc A.; Culotta, Anthony J.

    2007-01-01

    A prototype helmet audio system has been developed to improve voice communication for the wearer in a noisy environment. The system was originally intended to be used in a space suit, wherein noise generated by airflow of the spacesuit life-support system can make it difficult for remote listeners to understand the astronaut's speech and can interfere with the astronaut's attempt to issue vocal commands to a voice-controlled robot. The system could be adapted to terrestrial use in helmets of protective suits that are typically worn in noisy settings: examples include biohazard, fire, rescue, and diving suits. The system (see figure) includes an array of microphones and small loudspeakers mounted at fixed positions in a helmet, amplifiers and signal-routing circuitry, and a commercial digital signal processor (DSP). Notwithstanding the fixed positions of the microphones and loudspeakers, the system can accommodate itself to any normal motion of the wearer's head within the helmet. The system operates in conjunction with a radio transceiver. An audio signal arriving via the transceiver intended to be heard by the wearer is adjusted in volume and otherwise conditioned and sent to the loudspeakers. The wearer's speech is collected by the microphones, the outputs of which are logically combined (phased) so as to form a microphone-array directional sensitivity pattern that discriminates in favor of sounds coming from the vicinity of the wearer's mouth and against sounds coming from elsewhere. In the DSP, digitized samples of the microphone outputs are processed to filter out airflow noise and to eliminate feedback from the loudspeakers to the microphones. The resulting conditioned version of the wearer's speech signal is sent to the transceiver.

  4. Spatial audio reproduction with primary ambient extraction

    CERN Document Server

    He, JianJun

    2017-01-01

    This book first introduces the background of spatial audio reproduction, with different types of audio content and for different types of playback systems. A literature study on the classical and emerging Primary Ambient Extraction (PAE) techniques is presented. The emerging techniques aim to improve the extraction performance and also enhance the robustness of PAE approaches in dealing with more complex signals encountered in practice. The in-depth theoretical study helps readers to understand the rationales behind these approaches. Extensive objective and subjective experiments validate the feasibility of applying PAE in spatial audio reproduction systems. These experimental results, together with some representative audio examples and MATLAB codes of the key algorithms, illustrate clearly the differences among various approaches and also help readers gain insights on selecting different approaches for different applications.

  5. Automatic processing of CERN video, audio and photo archives

    International Nuclear Information System (INIS)

    Kwiatek, M

    2008-01-01

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services

  6. Automatic processing of CERN video, audio and photo archives

    Energy Technology Data Exchange (ETDEWEB)

    Kwiatek, M [CERN, Geneva (Switzerland)], E-mail: Michal.Kwiatek@cem.ch

    2008-07-15

    The digitalization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-time coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services.

  7. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model, in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in a compression, transformation, separation of sounds, etc. The designed system is based on methods used in the SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save phases of the sinusoids, they are important for better modelling of transients, for the computation of residual and last but not least for stereo signals, too. One of the fundamental properties of the proposed system is the ability of the signal reconstruction not only from the amplitude but from the phase point of view, as well.
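
    A minimal sketch of the analysis step of such an SN system: in each STFT frame, the bins around the largest spectral peaks (keeping both magnitude and phase) form the sinusoidal part, and the remainder is the noise residual. The mask-based split below is a simplification; a full SN model would track partials over time and fit a noise envelope.

      import numpy as np
      from scipy.signal import stft, istft, find_peaks

      def sn_split(x, fs, n_peaks=20, nperseg=1024):
          """Crude sinusoids-plus-noise decomposition via per-frame peak picking."""
          f, t, X = stft(x, fs, nperseg=nperseg)
          S = np.zeros_like(X)
          for k in range(X.shape[1]):
              mag = np.abs(X[:, k])
              peaks, _ = find_peaks(mag)
              for p in peaks[np.argsort(mag[peaks])[-n_peaks:]]:
                  S[max(p - 1, 0):p + 2, k] = X[max(p - 1, 0):p + 2, k]   # keep phase too
          _, sinusoidal = istft(S, fs, nperseg=nperseg)
          _, full = istft(X, fs, nperseg=nperseg)
          return sinusoidal, full - sinusoidal        # (sinusoidal part, noise residual)

      fs = 16000
      t = np.arange(fs) / fs
      x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(fs)
      sin_part, noise_part = sn_split(x, fs)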

  8. DOA Estimation of Audio Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Heusdens, Richard

    2016-01-01

    Reverberation is well-known to have a detrimental impact on many localization methods for audio sources. We address this problem by imposing a model for the early reflections as well as a model for the audio source itself. Using these models, we propose two iterative localization methods...... that estimate the direction-of-arrival (DOA) of both the direct path of the audio source and the early reflections. In these methods, the contribution of the early reflections is essentially subtracted from the signal observations before localization of the direct path component, which may reduce the estimation...

  9. Near-field Localization of Audio

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    Localization of audio sources using microphone arrays has been an important research problem for more than two decades. Many traditional methods for solving the problem are based on a two-stage procedure: first, information about the audio source, such as time differences-of-arrival (TDOAs......) and gain ratios-of-arrival (GROAs) between microphones is estimated, and, second, this knowledge is used to localize the audio source. These methods often have a low computational complexity, but this comes at the cost of a limited estimation accuracy. Therefore, we propose a new localization approach......, where the desired signal is modeled using TDOAs and GROAs, which are determined by the source location. This facilitates the derivation of one-stage, maximum likelihood methods under a white Gaussian noise assumption that is applicable in both near- and far-field scenarios. Simulations show...

  10. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs that were defined as intrinsic fluctuated similarly to the time-courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.

  11. Robust audio-visual speech recognition under noisy audio-video conditions.

    Science.gov (United States)

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either/both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, in both clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.

  12. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve the robustness and imperceptibility in practical application, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of discrete wavelet transform (DWT) and the energy compaction capability of discrete cosine transform (DCT). The human auditory system is insensitive to the minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying the frequency components of the audio signal. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth level detail coefficient is selected to be divided into the former packet and the latter packet, which are executed for DCT to get two sets of transform domain coefficients (TDC), respectively. Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to the special embedding rule. The watermark extraction is blind without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
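
    A minimal sketch of the embedding structure described above, for one audio fragment and one watermark bit: a 4-level DWT, the level-4 detail coefficients split into a former and a latter packet, a DCT of each packet, and a modification of the two average magnitudes so that their ordering encodes the bit. The concrete embedding rule, the db4 wavelet and the strength `delta` are assumptions of this sketch, not the paper's exact rule.

      import numpy as np
      import pywt
      from scipy.fft import dct, idct

      def embed_bit(fragment, bit, wavelet='db4', delta=0.2):
          coeffs = pywt.wavedec(fragment, wavelet, level=4)
          d4 = coeffs[1]                                  # 4th-level detail coefficients
          half = len(d4) // 2
          former = dct(d4[:half], norm='ortho')
          latter = dct(d4[half:], norm='ortho')
          m1, m2 = np.mean(np.abs(former)), np.mean(np.abs(latter))
          m = 0.5 * (m1 + m2)
          t1, t2 = ((1 + delta) * m, (1 - delta) * m) if bit else ((1 - delta) * m, (1 + delta) * m)
          former *= t1 / max(m1, 1e-12)                   # force the ordering of the two means
          latter *= t2 / max(m2, 1e-12)
          coeffs[1] = np.concatenate([idct(former, norm='ortho'), idct(latter, norm='ortho')])
          return pywt.waverec(coeffs, wavelet)[:len(fragment)]

      def extract_bit(fragment, wavelet='db4'):           # blind extraction
          d4 = pywt.wavedec(fragment, wavelet, level=4)[1]
          half = len(d4) // 2
          m1 = np.mean(np.abs(dct(d4[:half], norm='ortho')))
          m2 = np.mean(np.abs(dct(d4[half:], norm='ortho')))
          return int(m1 > m2)

      x = 0.1 * np.random.randn(4096)
      print(extract_bit(embed_bit(x, 1)))                 # expected output: 1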

  13. Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL)

    Science.gov (United States)

    2016-07-01

    AFRL-RH-WP-TR-2016-0074, Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL). Contract FA8650-15-C-9101. Authors: Dimitra Vergyri; Andreas Kathol; Wen Wang; Chris Bartels; Julian VanHout. ...feature transform through deep auto-encoders for better phone recognition performance. We target iterative learning to improve the system through

  14. A New Principle for a High Efficiency Power Audio Amplifier for Use with a Digital Preamplifier

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1986-01-01

    The use of class-B and class-D amplifiers for converting digital audio signals to analog signals is discussed. It is shown that the class-D amplifier is unsuitable due to distortion. Therefore, a new principle involving a switch-mode power supply and a class-B amplifier is suggested. By regulating...... the supply voltage to the amplifier according to the amplitude of the audio signal, a higher efficiency than can be obtained by the current principles is achieved. The regulation can be done very efficiently by generating the control signal to the power supply in advance of the audio signal, made possible

  15. A new principle for a high-efficiency power audio amplifier for use with a digital preamplifier

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1987-01-01

    The use of class-B and class-D amplifiers for converting digital audio signals to analog signals is discussed. It is shown that the class-D amplifier is unsuitable due to distortion. Therefore a new principle involving a switch-mode power supply and a class-B amplifier is suggested. By regulating...... the supply voltage to the amplifier according to the amplitude of the audio signal, a higher efficiency than can be obtained by the usual principles is achieved. The regulation can be done very efficiently by generating the control signal to the power supply in advance of the audio signal, made possible...

  16. Analysis of musical expression in audio signals

    Science.gov (United States)

    Dixon, Simon

    2003-01-01

    In western art music, composers communicate their work to performers via a standard notation which specifies the musical pitches and relative timings of notes. This notation may also include some higher level information such as variations in the dynamics, tempo and timing. Famous performers are characterised by their expressive interpretation, the ability to convey structural and emotive information within the given framework. The majority of work on audio content analysis focusses on retrieving score-level information; this paper reports on the extraction of parameters describing the performance, a task which requires a much higher degree of accuracy. Two systems are presented: BeatRoot, an off-line beat tracking system which finds the times of musical beats and tracks changes in tempo throughout a performance, and the Performance Worm, a system which provides a real-time visualisation of the two most important expressive dimensions, tempo and dynamics. Both of these systems are being used to process data for a large-scale study of musical expression in classical and romantic piano performance, which uses artificial intelligence (machine learning) techniques to discover fundamental patterns or principles governing expressive performance.

  17. Interpolation Filter Design for Hearing-Aid Audio Class-D Output Stage Application

    DEFF Research Database (Denmark)

    Pracný, Peter; Bruun, Erik; Llimos Muntal, Pere

    2012-01-01

    This paper deals with the design of a digital interpolation filter for a 3rd order multi-bit ΣΔ modulator with over-sampling ratio OSR = 64. The interpolation filter and the ΣΔ modulator are part of the back-end of an audio signal processing system in a hearing-aid application. The aim in this paper...... is to compare this design to designs presented in other state-of-the-art works ranging from hi-fi audio to hearing-aids. By performing this comparison, trends and tradeoffs in interpolation filter design are identified and hearing-aid specifications are derived. The possibilities for hardware reduction...... in the interpolation filter are investigated. Proposed design simplifications presented here result in the least hardware demanding combination of oversampling ratio, number of stages and number of filter taps among a number of filters reported for audio applications....

  18. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    Science.gov (United States)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of continuous wavelet transform. We focus specifically on three types of sources, namely, atmospherics, slow tails and whistlers, that cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of electromagnetic waves observed in processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio
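
    A minimal sketch of the detection idea: compute a continuous wavelet transform over a band of interest and flag time instants where the band-limited scalogram energy rises well above the background, as candidates for lightning-related events. The Morlet wavelet, the frequency band and the median-based threshold are assumptions; the paper goes further and classifies events by their full time-frequency and polarization signatures.

      import numpy as np
      import pywt
      from scipy.signal import find_peaks

      def detect_events(x, fs, f_lo=1000.0, f_hi=8000.0, n_scales=64, k=5.0):
          """Return candidate event times (s) from band-limited CWT energy."""
          freqs = np.geomspace(f_lo, f_hi, n_scales)
          scales = pywt.central_frequency('morl') * fs / freqs
          coefs, _ = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)
          energy = np.sum(np.abs(coefs) ** 2, axis=0)      # energy in the band vs. time
          mad = np.median(np.abs(energy - np.median(energy)))
          peaks, _ = find_peaks(energy, height=np.median(energy) + k * mad,
                                distance=int(0.01 * fs))
          return peaks / fs

      fs = 32000
      t = np.arange(int(0.5 * fs)) / fs
      x = 0.01 * np.random.randn(t.size)
      burst = np.sin(2 * np.pi * 4000 * t[:200]) * np.hanning(200)   # synthetic burst
      x[int(0.2 * fs):int(0.2 * fs) + 200] += burst
      print(detect_events(x, fs))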

  19. Estimation of the energy ratio between primary and ambience components in stereo audio data

    NARCIS (Netherlands)

    Harma, A.S.

    2011-01-01

    Stereo audio signal is often modeled as a mixture of instantaneously mixed primary components and uncorrelated ambience components. This paper focuses on the estimation of the primary-to-ambience energy ratio, PAR. This measure is useful for signal decomposition in stereo and multichannel audio
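
    A minimal sketch of a PAR estimate under the simplest form of this model: the primary component is assumed identical (fully correlated, equal power) in both channels and the ambience uncorrelated with equal power per channel, so the short-time cross-power approximates the primary power. These equal-power assumptions and the frame sizes are simplifications of the general model treated in the paper.

      import numpy as np

      def par_estimate(left, right, frame=2048, hop=1024, eps=1e-12):
          """Frame-wise primary-to-ambience energy ratio under equal-power assumptions."""
          ratios = []
          for start in range(0, len(left) - frame + 1, hop):
              l, r = left[start:start + frame], right[start:start + frame]
              p = 0.5 * (np.mean(l * l) + np.mean(r * r))   # average per-channel power
              c = np.mean(l * r)                            # cross power ~ primary power
              ratios.append(max(c, 0.0) / max(p - c, eps))  # PAR = primary / ambience
          return np.array(ratios)

      # A shared (primary) tone plus independent (ambience) noise in each channel.
      n = 48000
      primary = 0.3 * np.sin(2 * np.pi * 440 * np.arange(n) / 48000)
      left = primary + 0.1 * np.random.randn(n)
      right = primary + 0.1 * np.random.randn(n)
      print(par_estimate(left, right).mean())               # roughly 0.045 / 0.01 = 4.5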

  20. Computerized J-H loop tracer for soft magnetic thick films in the audio frequency range

    Directory of Open Access Journals (Sweden)

    Loizos G.

    2014-07-01

    Full Text Available A computerized J-H loop tracer for soft magnetic thick films in the audio frequency range is described. It is a system built on a PXI platform combining PXI modules for control signal generation and data acquisition. The physical signals are digitized and the respective data streams are processed, presented and recorded in LabVIEW 7.0.

  1. 106-17 Telemetry Standards Digitized Audio Telemetry Standard Chapter 5

    Science.gov (United States)

    2017-07-01

    Digitized Audio Telemetry Standard 5.1 General This chapter defines continuously variable slope delta (CVSD) modulation as the standard for digitizing...audio signal. The CVSD modulator is, in essence, a 1-bit analog-to-digital converter. The output of this 1-bit encoder is a serial bit stream, where
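
    A minimal sketch of a CVSD encoder/decoder pair: a 1-bit comparison against a running estimate, with the step size grown whenever the last few output bits are equal (slope overload) and decayed otherwise. The run length, step-size bounds and decay factor are illustrative values, not parameters taken from the telemetry standard.

      import numpy as np

      def _adapt(step, history, run=3, grow=1.2, decay=0.98, lo=0.002, hi=0.2):
          """Grow the step on a run of equal bits, otherwise let it decay."""
          if len(history) >= run and len(set(history[-run:])) == 1:
              return min(step * grow, hi)
          return max(step * decay, lo)

      def cvsd_encode(x):
          bits, history, est, step = [], [], 0.0, 0.002
          for sample in x:
              bit = 1 if sample >= est else 0
              bits.append(bit)
              history.append(bit)
              step = _adapt(step, history)
              est += step if bit else -step
          return np.array(bits, dtype=np.uint8)

      def cvsd_decode(bits):
          out, history, est, step = [], [], 0.0, 0.002
          for bit in bits:
              history.append(int(bit))
              step = _adapt(step, history)
              est += step if bit else -step
              out.append(est)
          return np.array(out)

      fs = 16000
      t = np.arange(fs) / fs
      x = 0.5 * np.sin(2 * np.pi * 300 * t)
      y = cvsd_decode(cvsd_encode(x))          # reconstructed approximation of x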

  2. Real-time Loudspeaker Distance Estimation with Stereo Audio

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Gaubitch, Nikolay; Heusdens, Richard

    2015-01-01

    Knowledge on how a number of loudspeakers are positioned relative to a listening position can be used to enhance the listening experience. Usually, these loudspeaker positions are estimated using calibration signals, either audible or psycho-acoustically hidden inside the desired audio signal...

  3. Class-D audio amplifiers with negative feedback

    OpenAIRE

    Cox, Stephen M.; Candy, B. H.

    2006-01-01

    There are many different designs for audio amplifiers. Class-D, or switching, amplifiers generate their output signal in the form of a high-frequency square wave of variable duty cycle (ratio of on time to off time). The square-wave nature of the output allows a particularly efficient output stage, with minimal losses. The output is ultimately filtered to remove components of the spectrum above the audio range. Mathematical models are derived here for a variety of related class-D amplifier de...

  4. A second-order class-D audio amplifier

    OpenAIRE

    Cox, Stephen M.; Tan, M.T.; Yu, J.

    2011-01-01

    Class-D audio amplifiers are particularly efficient, and this efficiency has led to their ubiquity in a wide range of modern electronic appliances. Their output takes the form of a high-frequency square wave whose duty cycle (ratio of on-time to off-time) is modulated at low frequency according to the audio signal. A mathematical model is developed here for a second-order class-D amplifier design (i.e., containing one second-order integrator) with negative feedback. We derive exact expression...

  5. Cepstral domain modification of audio signals for data embedding: preliminary results

    Science.gov (United States)

    Gopalan, Kaliappan

    2004-06-01

    A method of embedding data in an audio signal using cepstral domain modification is described. Based on successful embedding in the spectral points of perceptually masked regions in each frame of speech, first the technique was extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding with a bit error rate (BER) of less than 2 percent for a clean cover speech (from the TIMIT database), and about 2.5 percent for a noisy speech (from an air traffic controller database), when all frames, including silence and transitions between voiced and unvoiced segments, were used. Bit error rate increased significantly when the log spectrum in the vicinity of a formant was modified. In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when the lower range of cepstral indices, corresponding to the vocal tract region, was modified in accordance with the data. With an embedding capacity of approximately 62 bits/s, using one bit per frame regardless of frame energy or type of speech, initial results showed a BER of less than 1.5 percent for a payload capacity of 208 embedded bits using the clean cover speech. A BER of less than 1.3 percent resulted for the noisy host, whose capacity was 316 bits. When the cepstrum was modified in the region of excitation, BER increased to over 10 percent. With quantization causing no significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of speech that are perceptually masked, analogous to embedding in frequency-masked regions, may yield imperceptible stego audio with low BER.
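
    A minimal sketch of the cepstral-domain idea for one frame: shift the mean value of a low cepstral index range (the vocal-tract region) to encode a bit, then resynthesize the frame from the modified log-magnitude spectrum with the original phase. The index range, shift size, zero threshold at extraction and magnitude-only resynthesis are assumptions of this sketch, not the exact procedure of the paper.

      import numpy as np

      def embed_bit_cepstrum(frame, bit, idx=slice(10, 30), shift=0.1):
          spectrum = np.fft.rfft(frame)
          log_mag, phase = np.log(np.abs(spectrum) + 1e-12), np.angle(spectrum)
          cep = np.fft.irfft(log_mag, n=len(frame))          # real cepstrum of the frame
          target = shift if bit else -shift
          cep[idx] += target - cep[idx].mean()               # force the range mean to +/- shift
          new_log_mag = np.fft.rfft(cep).real
          return np.fft.irfft(np.exp(new_log_mag) * np.exp(1j * phase), n=len(frame))

      def extract_bit_cepstrum(frame, idx=slice(10, 30)):
          # Only the even part of the modified cepstrum survives magnitude-only
          # resynthesis, so the recovered shift is roughly halved; the zero threshold
          # assumes the unmarked cepstral mean in this range is close to zero.
          log_mag = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
          cep = np.fft.irfft(log_mag, n=len(frame))
          return int(cep[idx].mean() > 0)

      frame = 0.1 * np.random.randn(1024)
      print(extract_bit_cepstrum(embed_bit_cepstrum(frame, 1)))   # expected output: 1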

  6. A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity

    Directory of Open Access Journals (Sweden)

    Maoshen Jia

    2017-12-01

    Full Text Available Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. The statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a Psychoacoustic-based time-frequency instants sorting algorithm and an energy equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.

  7. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    Science.gov (United States)

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  8. Programmable delay circuit for sparker signal analysis

    Digital Repository Service at National Institute of Oceanography (India)

    Pathak, D.

    The sparker echo signal had been recorded along with the EPC recorder trigger on audio cassettes in a dual channel analog recorder. The sparker signal in the analog form had to be digitised for further signal processing techniques to be performed...

  9. Audio Papers

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2016-01-01

    With this special issue of Seismograf we are happy to present a new format of articles: Audio Papers. Audio papers resemble the regular essay or the academic text in that they deal with a certain topic of interest, but presented in the form of an audio production. The audio paper is an extension...

  10. A Novel Chewing Detection System Based on PPG, Audio, and Accelerometry.

    Science.gov (United States)

    Papapanagiotou, Vasileios; Diou, Christos; Zhou, Lingchuan; van den Boer, Janet; Mars, Monica; Delopoulos, Anastasios

    2017-05-01

    In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing of modern smartphones, can be used to robustly extract objective and real-time measurements of human behavior. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other types of sensors have also been reported, such as strain sensors. In this paper, performed in the context of the SPLENDID project, we propose to combine an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha, in a new high accuracy and low sampling rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately, and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging, semifree living dataset of 14 subjects, which includes more than 60 h of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.
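    The late-fusion step can be sketched as follows: train one SVM per modality and combine their decision scores with a fixed weight. The feature matrices and the fusion weight are placeholders, not the values used in the study.

```python
import numpy as np
from sklearn.svm import SVC

def train_late_fusion(X_audio, X_ppg, y, w_audio=0.6):
    """Train per-modality SVMs and return a fused predictor (sketch)."""
    clf_audio = SVC(kernel="rbf").fit(X_audio, y)
    clf_ppg = SVC(kernel="rbf").fit(X_ppg, y)

    def predict(Xa, Xp):
        score = (w_audio * clf_audio.decision_function(Xa)
                 + (1.0 - w_audio) * clf_ppg.decision_function(Xp))
        return (score > 0).astype(int)   # 1 = chewing detected

    return predict
```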

  11. Open soundcard as a platform for practical, laboratory study of digital audio

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2014-01-01

    This article investigates how lacking suitable platforms for laboratory exercises becomes a learning problem, limiting the practical experience students gain. In engineering education, laboratory demonstration difficulty of issues like real-time streaming in digital signal and audio processing...... afforded by such laboratories, and their open nature, could testably improve the diversity of demonstrated practical topics, while maintaining engineering students' motivation....

  12. Active Electromagnetic Interference Cancelation for Automotive Switch-Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Pfaffinger, Gerhard; Andersen, Michael A. E.

    2009-01-01

    Recent trends in the automotive audio industry have shown the importance of active noise cancelation (ANC) for major improvements in mobile entertainment environments. These approaches target the acoustical noise in the cabin and superimpose an inverse noise signal to cancel disturbances. Electro...

  13. Making the Switch to Digital Audio

    Directory of Open Access Journals (Sweden)

    Shannon Gwin Mitchell

    2004-12-01

    Full Text Available In this article, the authors describe the process of converting from analog to digital audio data. They address the step-by-step decisions that they made in selecting hardware and software for recording and converting digital audio, issues of system integration, and cost considerations. The authors present a brief description of how digital audio is being used in their current research project and how it has enhanced the “quality” of their qualitative research.

  14. Realisierung eines verzerrungsarmen Open-Loop Klasse-D Audio-Verstärkers mit SB-ZePoC

    Directory of Open Access Journals (Sweden)

    O. Schnick

    2007-06-01

    Full Text Available In recent years, the development of Class-D amplifiers for audio applications has attracted increasing interest. One motivation is the extremely high efficiency of over 90% that can be achieved with this technique. The signals that drive Class-D amplifiers are binary. More and more audio signals are either stored digitally (CD, DVD, MP3) or transmitted digitally (Internet, DRM, DAB, DVB-T, DVB-S, GSM, UMTS), which makes a direct conversion of these data into a binary control signal, without a prior conventional D/A conversion, appear desirable.

    Classical pulse-width modulation schemes lead to aliasing components in the audio baseband. These distortions can only be reduced to an acceptable level by a very high switching frequency. The SB-ZePoC method (Zero Position Coding with Separated Baseband), introduced by the research group of Prof. Mathis, prevents this kind of signal distortion by generating a separated baseband, so low switching frequencies can be chosen. This reduces not only the switching losses but also the timing distortions caused by the non-ideal switching output stage, which contribute a large share of the total distortion of a Class-D amplifier. With the SB-ZePoC scheme, low-distortion open-loop Class-D audio amplifiers can be realized that do not require elaborate feedback loops.

    Class-D amplifiers are suitable for the amplification of audio signals. One argument is their high efficiency of 90% and more. Today most audio signals are stored or transmitted in digital form. A digitally controlled Class-D amplifier can be directly driven with coded (modulated) data, so no separate D/A conversion is needed. Classical modulation schemes like Pulse-Width Modulation (PWM) cause aliasing, so a very high switching rate is required to minimize the

  15. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Full Text Available Robust and reversible watermarking is a potential technique for many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain in such a way that the histogram of these statistical values is shifted for data hiding. First, the original audio is divided into nonoverlapping equal-sized frames. In each frame, using three samples as a group generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results show that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
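    The per-frame statistic can be illustrated with a short sketch: the middle sample of each group of three is predicted from its two neighbours and the prediction errors are summed over the frame. The grouping rule and frame length are one plausible reading of the description above, not the authors' exact definitions; the histogram-shifting rule itself is only summarized in a comment.

```python
import numpy as np

def frame_feature(frame):
    """Sum of prediction errors when the middle sample of each group of three
    is predicted from its two neighbours (assumed interpretation)."""
    a, b, c = frame[0::3], frame[1::3], frame[2::3]
    n = min(len(a), len(b), len(c))
    return float(np.sum(b[:n] - (a[:n] + c[:n]) / 2.0))

def frame_features(audio, frame_len=600):
    starts = range(0, len(audio) - frame_len + 1, frame_len)
    return np.array([frame_feature(audio[s:s + frame_len]) for s in starts])

# Embedding then shifts the histogram of these feature values: a frame is
# modified so its feature moves into an adjacent, empty bin to encode a 1 and
# is left untouched to encode a 0, which keeps the process exactly reversible.
```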

  16. Advances in preventive monitoring of machinery through audio and vibration signals

    OpenAIRE

    Henríquez Rodríguez, Patricia

    2016-01-01

    Doctoral programme: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería. The publication date is the date of the thesis defence. The objective of this thesis is to improve machinery monitoring systems for fault diagnosis and identification using vibration and audio signals in two applications (bearings and centrifugal pumps), with particular emphasis on the feature extraction stage and on the use of audio as a source of information. In the case...

  17. Intelligent audio analysis

    CERN Document Server

    Schuller, Björn W

    2013-01-01

    This book provides the reader with the knowledge necessary for comprehension of the field of Intelligent Audio Analysis. It firstly introduces standard methods and discusses the typical Intelligent Audio Analysis chain going from audio data to audio features to audio recognition. Further, an introduction to audio source separation, enhancement, and robustness is given. After the introductory parts, the book shows several applications for the three types of audio: speech, music, and general sound. Each task is shortly introduced, followed by a description of the specific data and methods applied, experiments and results, and a conclusion for this specific task. The book provides benchmark results and standardized test-beds for a broader range of audio analysis tasks. The main focus thereby lies on the parallel advancement of realism in audio analysis, as too often today’s results are overly optimistic owing to idealized testing conditions, and it serves to stimulate synergies arising from transfer of ...

  18. A 240W Monolithic Class-D Audio Amplifier Output Stage

    DEFF Research Database (Denmark)

    Nyboe, Flemming; Kaya, Cetin; Risbo, Lars

    2006-01-01

    A single-channel class-D audio amplifier output stage outputs 240 W unclipped into 4 Ω. 0.1% open-loop THD+N allows using the device in a fully digital audio signal path with no feedback. The output current capability is ±18 A and the part is fabricated in a 0.4 µm/1.8 µm high-voltage Bi...

  19. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Full Text Available Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of number of sinusoids needed to represent signals at a given quality level.

  20. The audio expert everything you need to know about audio

    CERN Document Server

    Winer, Ethan

    2012-01-01

    The Audio Expert is a comprehensive reference that covers all aspects of audio, with many practical, as well as theoretical, explanations. Providing in-depth descriptions of how audio really works, using common sense plain-English explanations and mechanical analogies with minimal math, the book is written for people who want to understand audio at the deepest, most technical level, without needing an engineering degree. It's presented in an easy-to-read, conversational tone, and includes more than 400 figures and photos augmenting the text.The Audio Expert takes th

  1. Web Audio/Video Streaming Tool

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote NASA-wide educational outreach program to educate and inform the public of space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more contents to the web by streaming audio/video files. This project proposes a high level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled users interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database while the assets reside on separate repository. The prototype tool is designed using ColdFusion 5.0.

  2. Perceived Audio Quality Analysis in Digital Audio Broadcasting Plus System Based on PEAQ

    Directory of Open Access Journals (Sweden)

    K. Ulovec

    2018-04-01

    Full Text Available Broadcasters need to decide on bitrates of the services in the multiplex transmitted via the Digital Audio Broadcasting Plus system. The bitrate should be set as low as possible to fit a maximal number of services, but with high quality, not lower than in conventional analog systems. In this paper, the objective method Perceptual Evaluation of Audio Quality is used to analyze the perceived audio quality for appropriate codecs --- MP2 and AAC offering three profiles. The main aim is to determine dependencies on the type of signal --- music and speech, the number of channels --- stereo and mono, and the bitrate. Results indicate that only the MP2 codec and the AAC Low Complexity profile reach imperceptible quality loss. The MP2 codec needs a higher bitrate than the AAC Low Complexity profile for the same quality. For both versions of the AAC High-Efficiency profile, the limit bitrates are determined above which less complex profiles outperform the more complex ones, and higher bitrates above these limits are not worth using. It is shown that stereo music has worse quality than stereo speech generally, whereas for mono, the dependencies vary upon the codec/profile. Furthermore, numbers of services satisfying various quality criteria are presented.

  3. Realtime Audio with Garbage Collection

    OpenAIRE

    Matheussen, Kjetil Svalastog

    2010-01-01

    Two non-moving concurrent garbage collectors tailored for realtime audio processing are described. Both collectors work on copies of the heap to avoid cache misses and audio-disruptive synchronizations. Both collectors are targeted at multiprocessor personal computers. The first garbage collector works in uncooperative environments, and can replace Hans Boehm's conservative garbage collector for C and C++. The collector does not access the virtual memory system. Neither doe...

  4. AudioPairBank: Towards A Large-Scale Tag-Pair-Based Audio Content Analysis

    OpenAIRE

    Sager, Sebastian; Elizalde, Benjamin; Borth, Damian; Schulze, Christian; Raj, Bhiksha; Lane, Ian

    2016-01-01

    Recently, sound recognition has been used to identify sounds, such as car and river. However, sounds have nuances that may be better described by adjective-noun pairs such as slow car, and verb-noun pairs such as flying insects, which are underexplored. Therefore, in this work we investigate the relation between audio content and both adjective-noun pairs and verb-noun pairs. Due to the lack of datasets with these kinds of annotations, we collected and processed the AudioPairBank corpus cons...

  5. Horatio Audio-Describes Shakespeare's "Hamlet": Blind and Low-Vision Theatre-Goers Evaluate an Unconventional Audio Description Strategy

    Science.gov (United States)

    Udo, J. P.; Acevedo, B.; Fels, D. I.

    2010-01-01

    Audio description (AD) has been introduced as one solution for providing people who are blind or have low vision with access to live theatre, film and television content. However, there is little research to inform the process, user preferences and presentation style. We present a study of a single live audio-described performance of Hart House…

  6. Harmonic Enhancement in Low Bitrate Audio Coding Using an Efficient Long-Term Predictor

    Directory of Open Access Journals (Sweden)

    Song Jeongook

    2010-01-01

    Full Text Available This paper proposes audio coding using an efficient long-term prediction method to enhance the perceptual quality of audio codecs for speech input signals at low bitrates. MPEG-4 AAC-LTP exploited a similar concept, but its improvement was not significant because of the small prediction gain due to long prediction lags and aliased components caused by transformation with a time-domain aliasing cancelation (TDAC) technique. The proposed algorithm increases the prediction gain by employing a de-harmonizing predictor and a long-term compensation filter. The look-back memory elements are first constructed by applying the de-harmonizing predictor to the input signal, then the prediction residual is encoded and decoded by transform audio coding. Finally, the long-term compensation filter is applied to the updated look-back memory of the decoded prediction residual to obtain synthesized signals. Experimental results show that the proposed algorithm has much lower spectral distortion and higher perceptual quality than conventional approaches, especially for harmonic signals such as voiced speech.
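    A generic one-tap long-term (pitch) predictor, of the kind such schemes build on, can be sketched as follows: search for the lag with the highest normalized correlation and measure the resulting prediction gain. The lag range is an arbitrary assumption, and this is not the paper's de-harmonizing predictor itself.

```python
import numpy as np

def long_term_predictor(x, min_lag=40, max_lag=400):
    """Return (lag, gain, prediction_gain_dB) of a one-tap long-term predictor."""
    best_lag, best_corr = min_lag, -np.inf
    for lag in range(min_lag, max_lag + 1):
        num = np.dot(x[lag:], x[:-lag])
        den = np.sqrt(np.dot(x[:-lag], x[:-lag]) * np.dot(x[lag:], x[lag:])) + 1e-12
        if num / den > best_corr:
            best_corr, best_lag = num / den, lag
    past, cur = x[:-best_lag], x[best_lag:]
    gain = np.dot(cur, past) / (np.dot(past, past) + 1e-12)
    residual = cur - gain * past
    pg_db = 10 * np.log10(np.dot(cur, cur) / (np.dot(residual, residual) + 1e-12))
    return best_lag, gain, pg_db
```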

  7. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. Their findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  8. Audio Conferencing Enhancements

    OpenAIRE

    VESTERINEN, LEENA

    2006-01-01

    Audio conferencing allows multiple people in distant locations to interact in a single voice call. Whilst it can be a very useful service, it also has several key disadvantages. This thesis study investigated the options for improving the user experience of mobile teleconferencing applications. In particular, the use of 3D, spatial audio and visual-interactive functionality was investigated as a means of improving the intelligibility and audio perception during the audio...

  9. Parameter and state estimation using audio and video signals

    OpenAIRE

    Evestedt, Magnus

    2005-01-01

    The complexity of industrial systems and the mathematical models to describe them increases. In many cases point sensors are no longer sufficient to provide controllers and monitoring instruments with the information necessary for operation. The need for other types of information, such as audio and video, has grown. Suitable applications range in a broad spectrum from microelectromechanical systems and bio-medical engineering to papermaking and steel production. This thesis is divided into f...

  10. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone.

    Science.gov (United States)

    Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G

    2017-01-01

    Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.

  11. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone

    Directory of Open Access Journals (Sweden)

    Sarah Blum

    2017-01-01

    Full Text Available Objective. Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.

  12. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone

    Science.gov (United States)

    Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.

    2017-01-01

    Objective Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070

  13. Semantic Analysis of Multimedial Information Using Both Audio and Visual Clues

    Directory of Open Access Journals (Sweden)

    Andrej Lukac

    2008-01-01

    Full Text Available Nowadays, there is a lot of information in databases (text, audio/video form, etc.). It is important to be able to describe these data for better orientation in them. It is necessary to apply audio/video properties, which are used for metadata management, segmenting the document into semantically meaningful units, classifying each unit into a predefined scene type, indexing, and summarizing the document for efficient retrieval and browsing. The data can be used in a system that automatically searches for a specific person in a sequence, and also for special video sequences. Audio/video properties are represented by descriptors and description schemes. There are many features that can be used to characterize multimedia signals. We can analyze audio and video sequences jointly or consider them completely separately. Our aim is oriented towards the possibilities of combining multimedia features. The focus is directed at discussion programmes, because there are more choices of how to combine audio features with video sequences.

  14. On Modeling Affect in Audio with Non-Linear Symbolic Dynamics

    Directory of Open Access Journals (Sweden)

    Pauline Mouawad

    2017-09-01

    Full Text Available The discovery of semantic information from complex signals is a task concerned with connecting humans’ perceptions and/or intentions with the signals' content. In the case of audio signals, complex perceptions are appraised in a listener’s mind that trigger affective responses which may be relevant for well-being and survival. In this paper we are interested in the broader question of relations between uncertainty in data, as measured using various information criteria, and emotions, and we propose a novel method that combines nonlinear dynamics analysis with a method of adaptive time series symbolization that finds the meaningful audio structure in terms of symbolized recurrence properties. In a first phase we obtain symbolic recurrence quantification measures from symbolic recurrence plots, without the need to reconstruct the phase space with embedding. Then we estimate symbolic dynamical invariants from symbolized time series, after embedding. The invariants are: correlation dimension, correlation entropy and Lyapunov exponent. Through their application to the logistic map, we show that our measures are in agreement with known methods from the literature. We further show that one symbolic recurrence measure, namely the symbolic Shannon entropy, correlates positively with the positive Lyapunov exponents. Finally we evaluate the performance of our measures in emotion recognition through the implementation of classification tasks for different types of audio signals, and show that in some cases they perform better than state-of-the-art methods that rely on low-level acoustic features.
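    The symbolization and the symbolic Shannon entropy can be sketched in a few lines; quantile binning stands in for the paper's adaptive symbolization, and the logistic-map example simply mirrors the validation signal mentioned above.

```python
import numpy as np

def symbolize(x, n_symbols=8):
    """Map a time series onto integer symbols via quantile bins (stand-in)."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

def symbolic_shannon_entropy(symbols):
    """Shannon entropy (bits) of the symbol distribution."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Example: entropy of a symbolized logistic-map orbit in its chaotic regime.
x, r = [0.4], 3.9
for _ in range(2000):
    x.append(r * x[-1] * (1 - x[-1]))
print(symbolic_shannon_entropy(symbolize(np.array(x[100:]))))
```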

  15. Collusion-resistant audio fingerprinting system in the modulated complex lapped transform domain.

    Directory of Open Access Journals (Sweden)

    Jose Juan Garcia-Hernandez

    Full Text Available The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem, as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite the billion-dollar losses in the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals, and the corresponding parameters and operation conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirated audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer machines, it is shown that the proposed system is suitable for real-world scenarios.

  16. Radioactive Decay: Audio Data Collection

    Science.gov (United States)

    Struthers, Allan

    2009-01-01

    Many phenomena generate interesting audible time series. This data can be collected and processed using audio software. The free software package "Audacity" is used to demonstrate the process by recording, processing, and extracting click times from an inexpensive radiation detector. The high quality of the data is demonstrated with a simple…

  17. Audio Twister

    DEFF Research Database (Denmark)

    Cermak, Daniel; Moreno Garcia, Rodrigo; Monastiridis, Stefanos

    2015-01-01

    Daniel Cermak-Sassenrath, Rodrigo Moreno Garcia, Stefanos Monastiridis. Audio Twister. Installation. P-Hack Copenhagen 2015, Copenhagen, DK, Apr 24, 2015.

  18. Back to basics audio

    CERN Document Server

    Nathan, Julian

    1998-01-01

    Back to Basics Audio is a thorough, yet approachable handbook on audio electronics theory and equipment. The first part of the book discusses electrical and audio principles. Those principles form a basis for understanding the operation of equipment and systems, covered in the second section. Finally, the author addresses planning and installation of a home audio system.Julian Nathan joined the audio service and manufacturing industry in 1954 and moved into motion picture engineering and production in 1960. He installed and operated recording theaters in Sydney, Austra

  19. WebGL and web audio software lightweight components for multimedia education

    Science.gov (United States)

    Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław

    2017-08-01

    The paper presents the results of our recent work on the development of the contemporary computing platform DC2 for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define: source audio nodes including synthetic sources, destination audio nodes, and nodes for audio processing such as sound wave shaping, spectral band filtering, convolution-based modification, etc. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing and analysis by shaders is offered, such as nonlinear filtering, histograms of gradients, and Bayesian classifiers.

  20. Multiengine Speech Processing Using SNR Estimator in Variable Noisy Environments

    Directory of Open Access Journals (Sweden)

    Ahmad R. Abu-El-Quran

    2012-01-01

    Full Text Available We introduce a multiengine speech processing system that can detect the location and the type of audio signal in variable noisy environments. This system detects the location of the audio source using a microphone array; the system examines the audio first, determines if it is speech or nonspeech, then estimates the value of the signal-to-noise ratio (SNR) using a Discrete-Valued SNR Estimator. Using this SNR value, instead of trying to adapt the speech signal to the speech processing system, we adapt the speech processing system to the surrounding environment of the captured speech signal. In this paper, we introduce the Discrete-Valued SNR Estimator and a multiengine classifier, using Multiengine Selection or Multiengine Weighted Fusion, and we use SI as an example of the speech processing stage. The Discrete-Valued SNR Estimator achieves an accuracy of 98.4% in characterizing the environment's SNR. Compared to a conventional single-engine SI system, the improvement in accuracy was as high as 9.0% and 10.0% for Multiengine Selection and Multiengine Weighted Fusion, respectively.
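    The selection variant of the multiengine idea reduces to a small routing step: estimate the SNR, snap it to the nearest discrete level, and hand the signal to the engine trained for that level. The SNR levels and the engine registry below are hypothetical, and the energy-ratio estimator is a simplification of the paper's Discrete-Valued SNR Estimator.

```python
import numpy as np

def discrete_snr_db(signal, noise_estimate, levels=(0, 5, 10, 15, 20, 30)):
    """Estimate SNR in dB and snap it to the nearest discrete level (assumed)."""
    snr = 10 * np.log10(np.mean(signal ** 2) /
                        (np.mean(noise_estimate ** 2) + 1e-12))
    return min(levels, key=lambda level: abs(level - snr))

def multiengine_select(signal, noise_estimate, engines):
    """`engines` maps each discrete SNR level to a recognizer trained at that
    level (hypothetical registry); run only the matching engine."""
    return engines[discrete_snr_db(signal, noise_estimate)](signal)
```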

  1. Overview of the 2015 Workshop on Speech, Language and Audio in Multimedia

    NARCIS (Netherlands)

    Gravier, Guillaume; Jones, Gareth J.F.; Larson, Martha; Ordelman, Roeland J.F.

    2015-01-01

    The Workshop on Speech, Language and Audio in Multimedia (SLAM) positions itself at at the crossroad of multiple scientific fields - music and audio processing, speech processing, natural language processing and multimedia - to discuss and stimulate research results, projects, datasets and

  2. Audio-Tactile Integration in Congenitally and Late Deaf Cochlear Implant Users

    Science.gov (United States)

    Nava, Elena; Bottari, Davide; Villwock, Agnes; Fengler, Ineke; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2014-01-01

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be “rewired” through auditory reafferentation. PMID:24918766

  3. Audio-tactile integration in congenitally and late deaf cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Elena Nava

    Full Text Available Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss and in particular if no experience is achieved within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm, in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that congenital and acquired deafness does not prevent the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.

  4. MP3 audio-editing software for the department of radiology

    International Nuclear Information System (INIS)

    Hong Qingfen; Sun Canhui; Li Ziping; Meng Quanfei; Jiang Li

    2006-01-01

    Objective: To evaluate the MP3 audio-editing software in the daily work in the department of radiology. Methods: The audio content of daily consultation seminar, held in the department of radiology every morning, was recorded and converted into MP3 audio format by a computer integrated recording device. The audio data were edited, archived, and eventually saved in the computer memory storage media, which was experimentally replayed and applied in the research or teaching. Results: MP3 audio-editing was a simple process and convenient for saving and searching the data. The record could be easily replayed. Conclusion: MP3 audio-editing perfectly records and saves the contents of consultation seminar, and has replaced the conventional hand writing notes. It is a valuable tool in both research and teaching in the department. (authors)

  5. Editing Audio with Audacity

    Directory of Open Access Journals (Sweden)

    Brandon Walsh

    2016-08-01

    Full Text Available For those interested in audio, basic sound editing skills go a long way. Being able to handle and manipulate the materials can help you take control of your object of study: you can zoom in and extract particular moments to analyze, process the audio, and upload the materials to a server to complement a blog post on the topic. On a more practical level, these skills could also allow you to record and package recordings of yourself or others for distribution. That guest lecture taking place in your department? Record it and edit it yourself! Doing so is a lightweight way to distribute resources among various institutions, and it also helps make the materials more accessible for readers and listeners with a wide variety of learning needs. In this lesson you will learn how to use Audacity to load, record, edit, mix, and export audio files. Sound editing platforms are often expensive and offer extensive capabilities that can be overwhelming to the first-time user, but Audacity is a free and open source alternative that offers powerful capabilities for sound editing with a low barrier for entry. For this lesson we will work with two audio files: a recording of Bach’s Goldberg Variations available from MusOpen and another recording of your own voice that will be made in the course of the lesson. This tutorial uses Audacity 2.1.2, released January 2016.

  6. Computational Analysis and Simulation of Empathic Behaviors: a Survey of Empathy Modeling with Behavioral Signal Processing Framework.

    Science.gov (United States)

    Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S

    2016-05-01

    Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.

  7. Sinusoidal Analysis-Synthesis of Audio Using Perceptual Criteria

    Science.gov (United States)

    Painter, Ted; Spanias, Andreas

    2003-12-01

    This paper presents a new method for the selection of sinusoidal components for use in compact representations of narrowband audio. The method consists of ranking and selecting the most perceptually relevant sinusoids. The idea behind the method is to maximize the matching between the auditory excitation pattern associated with the original signal and the corresponding auditory excitation pattern associated with the modeled signal that is being represented by a small set of sinusoidal parameters. The proposed component-selection methodology is shown to outperform the maximum signal-to-mask ratio selection strategy in terms of subjective quality.

  8. Investigation of signal models and methods for evaluating structures of processing telecommunication information exchange systems under acoustic noise conditions

    Science.gov (United States)

    Kropotov, Y. A.; Belov, A. A.; Proskuryakov, A. Y.; Kolpakov, A. A.

    2018-05-01

    The paper considers models and methods for estimating signals during the transmission of information messages in telecommunication systems for audio exchange. One-dimensional probability distribution functions that can be used to separate useful signals from acoustic noise interference are presented. An approach to the estimation of the correlation and spectral functions of the parameters of acoustic signals is proposed, based on a parametric representation of the acoustic signals and of the noise components. The paper suggests an approach to improving the efficiency of interference cancellation and extracting the necessary information when processing signals in telecommunication systems. In this case, the suppression of acoustic noise is based on methods of adaptive filtering and adaptive compensation. The work also describes models of echo signals and the structure of subscriber devices in operational command telecommunication systems.
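    The adaptive filtering mentioned above can be illustrated with a classic LMS noise canceller, in which a reference noise pick-up is filtered to predict, and subtract, the noise in the primary channel. The filter length and step size are arbitrary illustrative values.

```python
import numpy as np

def lms_noise_canceller(primary, reference, taps=32, mu=0.01):
    """LMS adaptive noise cancellation sketch.

    primary   : speech + noise as captured by the main microphone
    reference : correlated noise-only pick-up
    Returns the enhanced (error) signal.
    """
    w = np.zeros(taps)
    enhanced = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]     # most recent reference samples
        e = primary[n] - np.dot(w, x)       # error = enhanced output sample
        w += 2 * mu * e * x                 # LMS coefficient update
        enhanced[n] = e
    return enhanced
```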

  9. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  10. Personalized Audio Systems - a Bayesian Approach

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Hansen, Toke Jansen

    2013-01-01

    Modern audio systems are typically equipped with several user-adjustable parameters unfamiliar to most users listening to the system. To obtain the best possible setting, the user is forced into multi-parameter optimization with respect to the user's own objective and preference. To address this, the present paper presents a general interactive framework for personalization of such audio systems. The framework builds on Bayesian Gaussian process regression, in which a model of the user's objective function is updated sequentially. The parameter setting to be evaluated in a given trial is selected
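    One sequential step of such a framework might look like the sketch below: fit a Gaussian process to the ratings the user has given so far and propose the candidate setting with the highest upper confidence bound. The acquisition rule, kernel, and the two hypothetical tuning parameters are assumptions for illustration, not the paper's preference-based formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_setting(tried, ratings, candidates, kappa=2.0):
    """Fit a GP to the user's ratings and return the most promising candidate."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(tried, ratings)
    mean, std = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mean + kappa * std)]   # upper confidence bound

# Hypothetical example with two tuning parameters (e.g. bass and treble gain):
tried = np.array([[0.0, 0.0], [3.0, -2.0], [-4.0, 5.0]])
ratings = np.array([2.0, 4.0, 1.0])           # user's scores for those settings
grid = np.array([[b, t] for b in np.linspace(-6, 6, 13)
                 for t in np.linspace(-6, 6, 13)])
print(next_setting(tried, ratings, grid))
```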

  11. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency
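    Frequency-domain linear prediction can be sketched compactly: applying linear prediction to the DCT of a (sub)band signal yields an all-pole model of its temporal envelope. The model order, the DCT stand-in for the actual filterbank, and the envelope sampling are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import solve_toeplitz

def fdlp_envelope(subband, order=20):
    """All-pole temporal envelope of a subband signal via frequency-domain LP."""
    y = dct(subband, norm="ortho")
    # Autocorrelation of the frequency-domain signal, lags 0..order.
    r = np.correlate(y, y, mode="full")[len(y) - 1:len(y) + order]
    a = solve_toeplitz(r[:order], -r[1:order + 1])   # LPC coefficients a1..ap
    a_full = np.concatenate(([1.0], a))
    # The all-pole model's magnitude response now lives on the time axis.
    H = np.fft.rfft(a_full, n=2 * len(subband))
    return 1.0 / (np.abs(H[:len(subband)]) ** 2 + 1e-12)
```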

  12. A scale-up field experiment for the monitoring of a burning process using chemical, audio, and video sensors.

    Science.gov (United States)

    Stavrakakis, P; Agapiou, A; Mikedi, K; Karma, S; Statheropoulos, M; Pallis, G C; Pappa, A

    2014-01-01

    Fires are becoming more violent and frequent, resulting in major economic losses and long-lasting effects on communities and ecosystems; thus, efficient fire monitoring is becoming a necessity. A novel triple multi-sensor approach was developed for monitoring and studying the burning of dry forest fuel in an open-field scheduled experiment; chemical, optical, and acoustical sensors were combined to record the fire spread. The results of this integrated field campaign for real-time monitoring of the fire event are presented and discussed. Chemical analysis, despite its limitations, corresponded to the burning process with a minor time delay. Nevertheless, the evolution profiles of CO2, CO, NO, and O2 were detected and monitored. The chemical monitoring of smoke components enabled observing the different fire phases (flaming, smoldering) based on the emissions identified in each phase. The analysis of fire acoustical signals gave an accurate and timely response to the fire event. In the same context, the use of a thermographic camera for monitoring the biomass burning was also effective (both profiles of the intensities of the average gray and red components were greater than 230) and showed similarly promising potential to the audio results. Further work is needed towards integrating the sensor signals for automation purposes, leading to potential applications in real situations.

  13. Automated Speech and Audio Analysis for Semantic Access to Multimedia

    NARCIS (Netherlands)

    Jong, F.M.G. de; Ordelman, R.; Huijbregts, M.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  14. Automated speech and audio analysis for semantic access to multimedia

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Ordelman, Roeland J.F.; Huijbregts, M.A.H.; Avrithis, Y.; Kompatsiaris, Y.; Staab, S.; O' Connor, N.E.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  15. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants’ heads had rotated through windows ranging in width of 2°, 4°, 8°, 16°, 32°, or 64° of azimuth. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  16. Sensitivity to audio-visual synchrony and its relation to language abilities in children with and without ASD.

    Science.gov (United States)

    Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J

    2018-04-01

    Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3s, 0.6s, or 1.0s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children who were language-matched to the ASD group, were sensitive to both 0.6s and 1.0s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.

  17. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio.

  18. Exploration of a digital audio processing platform using a compositional system level performance estimation framework

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan

    2009-01-01

    This paper presents the application of a compositional simulation-based system-level performance estimation framework on a non-trivial industrial case study. The case study is provided by the Danish company Bang & Olufsen ICEpower a/s and focuses on the exploration of a digital mobile audio processing platform. A short overview of the compositional performance estimation framework used is given, followed by a presentation of how it is used for performance estimation using an iterative refinement process towards the final implementation. Finally, an evaluation in terms of accuracy and speed

  19. High-Fidelity Piezoelectric Audio Device

    Science.gov (United States)

    Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.

    2003-01-01

    ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy efficient low-profile device with high-bandwidth high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than the speaker cones which use magnets (10 g). ModalMax devices have extreme fabrication simplicity. The entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers and thereby resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments. Encapsulation does not significantly alter the audio response. Its small size (see Figure 1) is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.

  20. Audio Description as a Pedagogical Tool

    Directory of Open Access Journals (Sweden)

    Georgina Kleege

    2015-05-01

    Full Text Available Audio description is the process of translating visual information into words for people who are blind or have low vision. Typically such description has focused on films, museum exhibitions, images and video on the internet, and live theater. Because it allows people with visual impairments to experience a variety of cultural and educational texts that would otherwise be inaccessible, audio description is a mandated aspect of disability inclusion, although it remains markedly underdeveloped and underutilized in our classrooms and in society in general. Along with increasing awareness of disability, audio description pushes students to practice close reading of visual material, deepen their analysis, and engage in critical discussions around the methodology, standards and values, language, and role of interpretation in a variety of academic disciplines. We outline a few pedagogical interventions that can be customized to different contexts to develop students' writing and critical thinking skills through guided description of visual material.

  1. Basic digital signal processing

    CERN Document Server

    Lockhart, Gordon B

    1985-01-01

    Basic Digital Signal Processing describes the principles of digital signal processing and experiments with BASIC programs involving the fast Fourier transform (FFT). The book reviews the fundamentals of the BASIC program, continuous and discrete time signals including analog signals, Fourier analysis, the discrete Fourier transform, signal energy, and power. The text also explains digital signal processing involving digital filters, linear time-invariant systems, the discrete time unit impulse, discrete-time convolution, and the alternative structure for second order infinite impulse response (IIR) sections.

  2. A high efficiency PWM CMOS class-D audio power amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Zhangming; Liu Lianxi; Yang Yintang [Institute of Microelectronics, Xidian University, Xi'an 710071 (China)]; Lei Han, E-mail: zmyh@263.ne [Xi'an Power-Rail Micro Co., Ltd, Xi'an 710075 (China)]

    2009-02-15

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  3. A high efficiency PWM CMOS class-D audio power amplifier

    International Nuclear Information System (INIS)

    Zhu Zhangming; Liu Lianxi; Yang Yintang; Lei Han

    2009-01-01

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  4. A high efficiency PWM CMOS class-D audio power amplifier

    Science.gov (United States)

    Zhangming, Zhu; Lianxi, Liu; Yintang, Yang; Han, Lei

    2009-02-01

    Based on the difference close-loop feedback technique and the difference pre-amp, a high efficiency PWM CMOS class-D audio power amplifier is proposed. A rail-to-rail PWM comparator with window function has been embedded in the class-D audio power amplifier. Design results based on the CSMC 0.5 μm CMOS process show that the max efficiency is 90%, the PSRR is -75 dB, the power supply voltage range is 2.5-5.5 V, the THD+N in 1 kHz input frequency is less than 0.20%, the quiescent current in no load is 2.8 mA, and the shutdown current is 0.5 μA. The active area of the class-D audio power amplifier is about 1.47 × 1.52 mm². With the good performance, the class-D audio power amplifier can be applied to several audio power systems.

  5. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In the recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes), a...... time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear, potential advantage of using the joint audiovisual localization framework....

  6. Pitch range variations improve cognitive processing of audio messages

    OpenAIRE

    Rodero Antón, Emma; Potter, Rob F.; Prieto Vives, Pilar, 1965-

    2017-01-01

    This study explores the effect of different speaker intonation strategies in audio messages on attention, autonomic arousal, and memory. An experiment was conducted in which participants listened to 16 radio commercials produced to vary in pitch range across sentences. Dependent variables were self-reported effectiveness and adequacy, psychophysiological arousal and attention, immediate word recall and recognition of information. Results showed that messages conveyed with pitch variations ach...

  7. Introduction to audio analysis a MATLAB approach

    CERN Document Server

    Giannakopoulos, Theodoros

    2014-01-01

    Introduction to Audio Analysis serves as a standalone introduction to audio analysis, providing theoretical background to many state-of-the-art techniques. It covers the essential theory necessary to develop audio engineering applications, but also uses programming techniques, notably MATLAB®, to take a more applied approach to the topic. Basic theory and reproducible experiments are combined to demonstrate theoretical concepts from a practical point of view and provide a solid foundation in the field of audio analysis. Audio feature extraction, audio classification, audio segmentation, au

  8. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    Science.gov (United States)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in all types of multimedia data formats, such as image and audio, and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are both used. All the results displayed prove to be time-efficient and effective. The algorithm is also tested for various numbers of bits. For those values of bits, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover-image when n is small, and the steganography process does not reveal the presence of any hidden message, thus qualifying the criteria of an imperceptible message.
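    As a rough illustration of the LSB idea described above (not the authors' exact algorithm, which also uses redundant noise as a secret key and DCT/DWT for audio), the following sketch hides a byte string in the least significant bits of 16-bit PCM samples and reports the resulting PSNR; NumPy is assumed, and the payload and carrier are synthetic.

    ```python
    import numpy as np

    def embed_lsb(cover: np.ndarray, message: bytes, n_bits: int = 2) -> np.ndarray:
        """Hide `message` in the n_bits least significant bits of int16 PCM samples."""
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        pad = (-bits.size) % n_bits                      # pad to a multiple of n_bits
        bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
        groups = bits.reshape(-1, n_bits)
        if groups.shape[0] > cover.size:
            raise ValueError("message does not fit in this carrier")
        weights = 1 << np.arange(n_bits - 1, -1, -1)     # e.g. [2, 1] for n_bits = 2
        payload = (groups * weights).sum(axis=1).astype(np.int16)
        stego = cover.copy()
        keep_mask = np.int16(~((1 << n_bits) - 1))       # clears the n_bits LSBs
        stego[:payload.size] = (stego[:payload.size] & keep_mask) | payload
        return stego

    def psnr(cover: np.ndarray, stego: np.ndarray, peak: float = 32767.0) -> float:
        """Peak Signal-to-Noise Ratio between cover and stego signals, in dB."""
        mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    # Tiny demo on a synthetic carrier.
    rng = np.random.default_rng(0)
    carrier = rng.integers(-20000, 20000, size=80000).astype(np.int16)
    stego = embed_lsb(carrier, b"hidden payload", n_bits=2)
    print(f"PSNR = {psnr(carrier, stego):.1f} dB")
    ```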

  9. Roundtable Audio Discussion

    Directory of Open Access Journals (Sweden)

    Chris Bigum

    2007-01-01

    Full Text Available RoundTable on Technology, Teaching and Tools. This is a roundtable audio interview conducted by James Farmer, founder of Edublogs, with Anne Bartlett-Bragg (University of Technology Sydney) and Chris Bigum (Deakin University). Skype was used to make and record the audio conference and the resulting sound file was edited by Andrew McLauchlan.

  10. Signals, processes, and systems an interactive multimedia introduction to signal processing

    CERN Document Server

    Karrenberg, Ulrich

    2013-01-01

    This is a very new concept for learning Signal Processing, not only from the physically-based scientific fundamentals, but also from the didactic perspective, based on modern results of brain research. The textbook together with the DVD form a learning system that provides investigative studies and enables the reader to interactively visualize even complex processes. The unique didactic concept is built on visualizing signals and processes on the one hand, and on graphical programming of signal processing systems on the other. The concept has been designed especially for microelectronics, computer technology and communication. The book allows the reader to develop, modify, and optimize useful applications using DasyLab - a professional and globally supported software for metrology and control engineering. With the 3rd edition, the software is also suitable for 64 bit systems running on Windows 7. Real signals can be acquired, processed and played on the sound card of your computer. The book provides more than 200 pre-pr...

  11. Audio-vocal interaction in single neurons of the monkey ventrolateral prefrontal cortex.

    Science.gov (United States)

    Hage, Steffen R; Nieder, Andreas

    2015-05-06

    Complex audio-vocal integration systems depend on a strong interconnection between the auditory and the vocal motor system. To gain cognitive control over audio-vocal interaction during vocal motor control, the PFC needs to be involved. Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perceptions and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2015 the authors 0270-6474/15/357030-11$15.00/0.

  12. Grafting Acoustic Instruments and Signal Processing: Creative Control and Augmented Expressivity

    DEFF Research Database (Denmark)

    Overholt, Daniel; Freed, Adrian

    In this study, work is presented on a hybrid acoustic / electric violin. The instrument has embedded processing that provides real-time simulation of acoustic body models using DSP techniques able to gradually transform a given body model into another, including extrapolations beyond the models...... acoustic violins with high fidelity. The opportunity to control a virtually malleable body while playing, i.e., a model that changes reverberant resonances in response to player input, results in interesting audio effects. Other common audio effects can also be employed and simultaneously controlled via...... the musician’s movements. For example, gestural tilting of the instrument is tracked via an embedded Inertial Measurement Unit (IMU), which can be assigned to alter parameters such as the wet/dry mix of a simple octave-doubler or other more advanced audio effect, further augmenting the expressivity...

  13. Musical examination to bridge audio data and sheet music

    Science.gov (United States)

    Pan, Xunyu; Cross, Timothy J.; Xiao, Liangliang; Hei, Xiali

    2015-03-01

    The digitalization of audio is commonly implemented for the purpose of convenient storage and transmission of music and songs in today's digital age. Analyzing digital audio for an insightful look at a specific musical characteristic, however, can be quite challenging for various types of applications. Many existing musical analysis techniques can examine a particular piece of audio data. For example, the frequency of digital sound can be easily read and identified at a specific section in an audio file. Based on this information, we could determine the musical note being played at that instant, but what if you want to see a list of all the notes played in a song? While most existing methods help to provide information about a single piece of the audio data at a time, few of them can analyze the available audio file on a larger scale. The research conducted in this work considers how to further utilize the examination of audio data by storing more information from the original audio file. In practice, we develop a novel musical analysis system Musicians Aid to process musical representation and examination of audio data. Musicians Aid solves the previous problem by storing and analyzing the audio information as it reads it rather than tossing it aside. The system can provide professional musicians with an insightful look at the music they created and advance their understanding of their work. Amateur musicians could also benefit from using it solely for the purpose of obtaining feedback about a song they were attempting to play. By comparing our system's interpretation of traditional sheet music with their own playing, a musician could ensure what they played was correct. More specifically, the system could show them exactly where they went wrong and how to adjust their mistakes. In addition, the application could be extended over the Internet to allow users to play music with one another and then review the audio data they produced. This would be particularly
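    The note-identification step mentioned above can be illustrated with a small sketch: pick the strongest FFT peak of a frame and map it to the nearest equal-tempered note name, assuming A4 = 440 Hz. This is only a stand-in for the analysis Musicians Aid performs; the frame length and sample rate are arbitrary choices.

    ```python
    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def dominant_frequency(frame: np.ndarray, sample_rate: float) -> float:
        """Return the frequency (Hz) of the strongest spectral peak in one audio frame."""
        windowed = frame * np.hanning(len(frame))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        return float(freqs[np.argmax(spectrum)])

    def frequency_to_note(freq_hz: float, a4: float = 440.0) -> str:
        """Map a frequency to the nearest equal-tempered note name, e.g. 'A4'."""
        midi = int(round(69 + 12 * np.log2(freq_hz / a4)))
        return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

    # Example: a synthetic 440 Hz tone should map to 'A4'.
    sr = 44100
    t = np.arange(2048) / sr
    print(frequency_to_note(dominant_frequency(np.sin(2 * np.pi * 440 * t), sr)))
    ```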

  14. Topological signal processing

    CERN Document Server

    Robinson, Michael

    2014-01-01

    Signal processing is the discipline of extracting information from collections of measurements. To be effective, the measurements must be organized and then filtered, detected, or transformed to expose the desired information.  Distortions caused by uncertainty, noise, and clutter degrade the performance of practical signal processing systems. In aggressively uncertain situations, the full truth about an underlying signal cannot be known.  This book develops the theory and practice of signal processing systems for these situations that extract useful, qualitative information using the mathematics of topology -- the study of spaces under continuous transformations.  Since the collection of continuous transformations is large and varied, tools which are topologically-motivated are automatically insensitive to substantial distortion. The target audience comprises practitioners as well as researchers, but the book may also be beneficial for graduate students.

  15. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules in Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot which has a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in a periodic or non-periodic motion. The object then emits periodic/non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we had a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signal) which are related to robot motion (efferent signal).

  16. A compact electroencephalogram recording device with integrated audio stimulation system

    Science.gov (United States)

    Paukkunen, Antti K. O.; Kurttio, Anttu A.; Leminen, Miika M.; Sepponen, Raimo E.

    2010-06-01

    A compact (96×128×32 mm3, 374 g), battery-powered, eight-channel electroencephalogram recording device with an integrated audio stimulation system and a wireless interface is presented. The recording device is capable of producing high-quality data, while the operating time is also reasonable for evoked potential studies. The effective measurement resolution is about 4 nV at 200 Hz sample rate, typical noise level is below 0.7 μVrms at 0.16-70 Hz, and the estimated operating time is 1.5 h. An embedded audio decoder circuit reads and plays wave sound files stored on a memory card. The activities are controlled by an 8 bit main control unit which allows accurate timing of the stimuli. The interstimulus interval jitter measured is less than 1 ms. Wireless communication is made through bluetooth and the data recorded are transmitted to an external personal computer (PC) interface in real time. The PC interface is implemented with LABVIEW® and in addition to data acquisition it also allows online signal processing, data storage, and control of measurement activities such as contact impedance measurement, for example. The practical application of the device is demonstrated in mismatch negativity experiment with three test subjects.

  17. Real Time Recognition Of Speakers From Internet Audio Stream

    Directory of Open Access Journals (Sweden)

    Weychan Radoslaw

    2015-09-01

    Full Text Available In this paper we present an automatic speaker recognition technique using lossy encoded speech signal streams from Internet radio. We show the influence of the audio encoder (e.g., bitrate) on the speaker model quality. The model of each speaker was calculated with the use of the Gaussian mixture model (GMM) approach. Both the speaker recognition and the further analysis were realized with the use of short utterances to facilitate real time processing. The neighborhoods of the speaker models were analyzed with the use of the ISOMAP algorithm. The experiments were based on four 1-hour public debates with 7–8 speakers (including the moderator), acquired from the Polish radio Internet services. The presented software was developed with the MATLAB environment.
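    A minimal sketch of the GMM speaker-modelling idea is given below, using librosa for MFCC features and scikit-learn for the mixture models rather than the MATLAB tools used in the paper; the file paths, 16 mixture components, and 13 MFCCs are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
        """Load an utterance and return a (frames, n_mfcc) MFCC matrix."""
        y, sr = librosa.load(path, sr=sr, mono=True)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_speaker_model(paths: list[str], n_components: int = 16) -> GaussianMixture:
        """Fit one GMM on all training utterances of a single speaker."""
        feats = np.vstack([mfcc_features(p) for p in paths])
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag", max_iter=200)
        return gmm.fit(feats)

    def identify(path: str, models: dict[str, GaussianMixture]) -> str:
        """Return the speaker whose model gives the highest average log-likelihood."""
        feats = mfcc_features(path)
        return max(models, key=lambda name: models[name].score(feats))
    ```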

  18. Could Audio-Described Films Benefit from Audio Introductions? An Audience Response Study

    Science.gov (United States)

    Romero-Fresco, Pablo; Fryer, Louise

    2013-01-01

    Introduction: Time constraints limit the quantity and type of information conveyed in audio description (AD) for films, in particular the cinematic aspects. Inspired by introductory notes for theatre AD, this study developed audio introductions (AIs) for "Slumdog Millionaire" and "Man on Wire." Each AI comprised 10 minutes of…

  19. A Method to Detect AAC Audio Forgery

    Directory of Open Access Journals (Sweden)

    Qingzhong Liu

    2015-08-01

    Full Text Available Advanced Audio Coding (AAC), a standardized lossy compression scheme for digital audio which was designed to be the successor of the MP3 format, generally achieves better sound quality than MP3 at similar bit rates. While AAC is the default or standard audio format for many devices and AAC audio files may be presented as important digital evidence, methods for authenticating such audio files are highly needed but largely missing. In this paper, we propose a scheme to expose tampered AAC audio streams that are encoded at the same encoding bit-rate. Specifically, we design a shift-recompression based method to retrieve the differential features between the re-encoded audio stream at each shifting and the original audio stream; a learning classifier is employed to recognize the different patterns of differential features of doctored forgery files and original (untouched) audio files. Experimental results show that our approach is very promising and effective in detecting forgery at the same encoding bit-rate on AAC audio streams. Our study also shows that shift-recompression based differential analysis is very effective for detection of MP3 forgery at the same bit rate.

  20. Location audio simplified capturing your audio and your audience

    CERN Document Server

    Miles, Dean

    2014-01-01

    From the basics of using camera, handheld, lavalier, and shotgun microphones to camera calibration and mixer set-ups, Location Audio Simplified unlocks the secrets to clean and clear broadcast quality audio no matter what challenges you face. Author Dean Miles applies his twenty-plus years of experience as a professional location operator to teach the skills, techniques, tips, and secrets needed to produce high-quality production sound on location. Humorous and thoroughly practical, the book covers a wide array of topics, such as location selection, field mixing, boo...

  1. Smartphone audio port data collection cookbook

    Directory of Open Access Journals (Sweden)

    Kyle Forinash

    2018-06-01

    Full Text Available The audio port of a smartphone is designed to send and receive audio but can be harnessed for portable, economical, and accurate data collection from a variety of sources. While smartphones have internal sensors to measure a number of physical phenomena such as acceleration, magnetism and illumination levels, measurement of other phenomena such as voltage, external temperature, or accurate timing of moving objects is excluded. The audio port can not only be employed to sense such external phenomena; it has the additional advantage of timing precision. Because audio is recorded or played at a controlled rate separated from other smartphone activities, timings based on audio can be highly accurate. The following outlines unpublished details of the audio port technical elements for data collection, a general data collection recipe and an example timing application for Android devices.

  2. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
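    The first-level detection described above rests on two classic short-time features, energy and zero-crossing rate. The sketch below computes both with NumPy and flags frames exceeding thresholds; the frame length, hop size, and thresholds are assumptions for illustration, not the paper's values.

    ```python
    import numpy as np

    def frame_signal(x: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
        """Split a 1-D signal into overlapping frames (one frame per row)."""
        n_frames = 1 + (len(x) - frame_len) // hop
        idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
        return x[idx]

    def short_time_energy(x, frame_len=1024, hop=512):
        frames = frame_signal(x.astype(np.float64), frame_len, hop)
        return np.mean(frames ** 2, axis=1)

    def zero_crossing_rate(x, frame_len=1024, hop=512):
        frames = frame_signal(np.sign(x.astype(np.float64)), frame_len, hop)
        return np.mean(np.abs(np.diff(frames, axis=1)) > 0, axis=1)

    def detect_event_frames(x, energy_thr, zcr_thr):
        """Flag frames whose energy and ZCR both exceed (assumed) thresholds,
        a rough stand-in for the paper's first-level event detection."""
        return (short_time_energy(x) > energy_thr) & (zero_crossing_rate(x) > zcr_thr)
    ```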

  3. Securing Digital Audio using Complex Quadratic Map

    Science.gov (United States)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy to do, and it is therefore vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio. So, we need a data-securing method that is fast and not vulnerable. One of the methods that matches all of those criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). There are parameter values that cause the key stream generated by the CQM function to pass all 15 NIST tests, which means that the key stream generated using this CQM is proven to be random. In addition, samples of encrypted digital sound, when tested using a goodness-of-fit test, are proven to be uniform, so securing digital audio using this method is not vulnerable to frequency analysis attack. The key space is very large, about 8.1×10³¹ possible keys, and the key sensitivity is very small, about 10⁻¹⁰, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both the encryption and decryption processes is on average about 450 times faster than the digital audio duration.
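    The encryption idea can be sketched as follows: iterate the complex quadratic map z ← z² + c, derive a byte key stream from the iterates, and XOR it with the audio samples (decryption is the same XOR). The constants, the fold-back safeguard, and the way iterates are turned into bytes below are illustrative assumptions, not the parameterization from the paper.

    ```python
    import numpy as np

    def cqm_keystream(c: complex, z0: complex, n_bytes: int, burn_in: int = 1000) -> np.ndarray:
        """Byte key stream derived from iterates of the complex quadratic map z <- z**2 + c."""
        z = z0
        for _ in range(burn_in):                  # discard the transient
            z = z * z + c
            if abs(z) > 2.0:
                z = z / (abs(z) ** 2)             # illustrative safeguard to keep the orbit bounded
        key = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            z = z * z + c
            if abs(z) > 2.0:
                z = z / (abs(z) ** 2)
            # Mix fractional parts of the real and imaginary components into one byte
            # (an illustrative extraction, not the paper's).
            frac = (abs(z.real) * 1e6) % 1.0 + (abs(z.imag) * 1e6) % 1.0
            key[i] = int(frac * 255) % 256
        return key

    def xor_cipher(samples: np.ndarray, key: np.ndarray) -> np.ndarray:
        """Encrypt or decrypt 8-bit audio samples by XOR with the key stream."""
        return np.bitwise_xor(samples.astype(np.uint8), key[: samples.size])

    # Demo: encrypting twice with the same key recovers the original samples.
    audio = np.frombuffer(b"example 8-bit PCM payload", dtype=np.uint8)
    key = cqm_keystream(c=-0.4 + 0.6j, z0=0.1 + 0.1j, n_bytes=audio.size)
    assert np.array_equal(xor_cipher(xor_cipher(audio, key), key), audio)
    ```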

  4. Machine intelligence and signal processing

    CERN Document Server

    Vatsa, Mayank; Majumdar, Angshul; Kumar, Ajay

    2016-01-01

    This book comprises chapters on key problems in the machine learning and signal processing arenas. The contents of the book are a result of a 2014 Workshop on Machine Intelligence and Signal Processing held at the Indraprastha Institute of Information Technology. Traditionally, signal processing and machine learning were considered to be separate areas of research. However, in recent times the two communities are getting closer. In a very abstract fashion, signal processing is the study of operator design. The contributions of signal processing have been to devise operators for restoration, compression, etc. Applied mathematicians were more interested in operator analysis. Nowadays signal processing research is gravitating towards operator learning – instead of designing operators based on heuristics (for example wavelets), the trend is to learn these operators (for example dictionary learning). Thus, the gap between signal processing and machine learning is fast closing. The 2014 Workshop on Machine Intel...

  5. Structure Learning in Audio

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch

    By having information about the setting a user is in, a computer is able to make decisions proactively to facilitate tasks for the user. Two approaches are taken in this thesis to achieve more information about an audio environment. One approach is that of classifying audio, and a new approach...... investigated. A fast and computationally simple approach that compares recordings and classifies whether they are from the same audio environment has been developed, and shows very high accuracy and the ability to synchronize recordings in the case of recording devices which are not connected. A more general model...

  6. TC9447F, single-chip DSP (digital signal processor) for audio; 1 chip audio yo DSP LSI TC9447F

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    TC9447F is a single-chip DSP for audio which has a built-in 2-channel AD converter and 4-channel DA converter. Various application programs, such as sound field control like hall simulation, digital filters like equalizers, and dynamic range control, can be built into the program memory (ROM). Furthermore, it has a built-in electronic volume control with ±10 dB trim for two channels. It also has built-in RAM (64K-bit) for data delay, so no separately attached RAM is necessary. (translated by NEDO)

  7. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  8. A centralized audio presentation manager

    Energy Technology Data Exchange (ETDEWEB)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  9. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  10. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    Introduction to Digital Signal and Image Processing: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview...

  11. Implementing Audio-CASI on Windows’ Platforms

    Science.gov (United States)

    Cooley, Philip C.; Turner, Charles F.

    2011-01-01

    Audio computer-assisted self interviewing (Audio-CASI) technologies have recently been shown to provide important and sometimes dramatic improvements in the quality of survey measurements. This is particularly true for measurements requiring respondents to divulge highly sensitive information such as their sexual, drug use, or other sensitive behaviors. However, DOS-based Audio-CASI systems that were designed and adopted in the early 1990s have important limitations. Most salient is the poor control they provide for manipulating the video presentation of survey questions. This article reports our experiences adapting Audio-CASI to Microsoft Windows 3.1 and Windows 95 platforms. Overall, our Windows-based system provided the desired control over video presentation and afforded other advantages, including compatibility with a much wider array of audio devices than our DOS-based Audio-CASI technologies. These advantages came at the cost of increased system requirements, including the need for both more RAM and larger hard disks. While these costs will be an issue for organizations converting large inventories of PCs to Windows Audio-CASI today, this will not be a serious constraint for organizations and individuals with small inventories of machines to upgrade or those purchasing new machines today. PMID:22081743

  12. An ESL Audio-Script Writing Workshop

    Science.gov (United States)

    Miller, Carla

    2012-01-01

    The roles of dialogue, collaborative writing, and authentic communication have been explored as effective strategies in second language writing classrooms. In this article, the stages of an innovative, multi-skill writing method, which embeds students' personal voices into the writing process, are explored. A 10-step ESL Audio Script Writing Model…

  13. Ultrahigh bandwidth signal processing

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo

    2016-01-01

    Optical time lenses have proven to be very versatile for advanced optical signal processing. Based on a controlled interplay between dispersion and phase-modulation by e.g. four-wave mixing, the processing is phase-preserving, and hence useful for all types of data signals including coherent multi-level modulation formats. This has enabled processing of phase-modulated spectrally efficient data signals, such as orthogonal frequency division multiplexed (OFDM) signals. In that case, a spectral telescope system was used, using two time lenses with different focal lengths (chirp rates), yielding a spectral...... regeneration. These operations require a broad bandwidth nonlinear platform, and novel photonic integrated nonlinear platforms like aluminum gallium arsenide nano-waveguides used for 1.28 Tbaud optical signal processing will be described....

  14. Audio frequency in vivo optical coherence elastography

    Science.gov (United States)

    Adie, Steven G.; Kennedy, Brendan F.; Armstrong, Julian J.; Alexandrov, Sergey A.; Sampson, David D.

    2009-05-01

    We present a new approach to optical coherence elastography (OCE), which probes the local elastic properties of tissue by using optical coherence tomography to measure the effect of an applied stimulus in the audio frequency range. We describe the approach, based on analysis of the Bessel frequency spectrum of the interferometric signal detected from scatterers undergoing periodic motion in response to an applied stimulus. We present quantitative results of sub-micron excitation at 820 Hz in a layered phantom and the first such measurements in human skin in vivo.

  15. Audio frequency in vivo optical coherence elastography

    International Nuclear Information System (INIS)

    Adie, Steven G; Kennedy, Brendan F; Armstrong, Julian J; Alexandrov, Sergey A; Sampson, David D

    2009-01-01

    We present a new approach to optical coherence elastography (OCE), which probes the local elastic properties of tissue by using optical coherence tomography to measure the effect of an applied stimulus in the audio frequency range. We describe the approach, based on analysis of the Bessel frequency spectrum of the interferometric signal detected from scatterers undergoing periodic motion in response to an applied stimulus. We present quantitative results of sub-micron excitation at 820 Hz in a layered phantom and the first such measurements in human skin in vivo.

  16. Audio wiring guide how to wire the most popular audio and video connectors

    CERN Document Server

    Hechtman, John

    2012-01-01

    Whether you're a pro or an amateur, a musician or into multimedia, you can't afford to guess about audio wiring. The Audio Wiring Guide is a comprehensive, easy-to-use guide that explains exactly what you need to know. No matter the size of your wiring project or installation, this handy tool provides you with the essential information you need and the techniques to use it. Using The Audio Wiring Guide is like having an expert at your side. By following the clear, step-by-step directions, you can do professional-level work at a fraction of the cost.

  17. Design of batch audio/video conversion platform based on JavaEE

    Science.gov (United States)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing shows significant features such as a diversity of coding standards for audio and video files and massive data volumes. Faced with massive and diverse data, converting quickly and efficiently to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand and present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+MyBatis development architecture and combined with the open-source FFmpeg format conversion tool. Based on the Java language, the key technologies and strategies of the platform architecture are analyzed emphatically, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server, and a conversion server. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large batch file processing, and has certain practical application value.
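    The conversion work itself is delegated to FFmpeg. The sketch below shows the same idea in Python rather than the JavaEE stack used by the platform: each job shells out to the ffmpeg command line, and a small thread pool stands in for the platform's pool of conversion servers. The paths, target codec, and bitrate are assumptions for illustration.

    ```python
    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def convert(src: Path, dst_dir: Path, codec: str = "aac", bitrate: str = "128k") -> Path:
        """Re-encode one audio file to a unified format by shelling out to FFmpeg."""
        dst = dst_dir / (src.stem + ".m4a")
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(src), "-c:a", codec, "-b:a", bitrate, str(dst)],
            check=True, capture_output=True,
        )
        return dst

    def batch_convert(sources: list[Path], dst_dir: Path, workers: int = 4) -> list[Path]:
        """Run conversions concurrently, mimicking a pool of conversion servers."""
        dst_dir.mkdir(parents=True, exist_ok=True)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda s: convert(s, dst_dir), sources))
    ```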

  18. Audio watermarking robust against D/A and A/D conversions

    Directory of Open Access Journals (Sweden)

    Xiang Shijun

    2011-01-01

    Full Text Available Abstract Digital audio watermarking robust against digital-to-analog (D/A) and analog-to-digital (A/D) conversions is an important issue. In a number of watermark application scenarios, D/A and A/D conversions are involved. In this article, we first investigate the degradation due to DA/AD conversions via sound cards, which can be decomposed into volume change, additional noise, and time-scale modification (TSM). Then, we propose a solution for DA/AD conversions by considering the effects of the volume change, additional noise and TSM. For the volume change, we introduce a relation-based watermarking method by modifying the energy relation of groups of three adjacent DWT coefficient sections. For the additional noise, we pick the lowest-frequency coefficients for watermarking. For the TSM, a synchronization technique (with synchronization codes) and an interpolation processing operation are exploited. Simulation tests show the proposed audio watermarking algorithm provides a satisfactory performance against DA/AD conversions and common audio processing manipulations.
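    A heavily simplified sketch of the relation-based embedding idea (forcing an energy relation between adjacent sections of low-frequency DWT coefficients) is given below using PyWavelets; the wavelet, decomposition level, section layout, and decision rule are assumptions, and the synchronization-code and interpolation steps of the paper are omitted.

    ```python
    import numpy as np
    import pywt

    def embed_bit(signal: np.ndarray, bit: int, level: int = 5, strength: float = 1.2) -> np.ndarray:
        """Embed one watermark bit by adjusting the energy relation between three
        adjacent sections of the lowest-frequency DWT coefficients (illustrative)."""
        coeffs = pywt.wavedec(signal.astype(np.float64), "db4", level=level)
        approx = coeffs[0]
        third = len(approx) // 3
        a, b, c = approx[:third], approx[third:2 * third], approx[2 * third:3 * third]
        ea, eb, ec = (np.sum(s ** 2) for s in (a, b, c))
        # bit 1: make the middle section the most energetic; bit 0: the least energetic.
        target = strength * max(ea, ec) if bit else min(ea, ec) / strength
        if eb > 0:
            approx[third:2 * third] = b * np.sqrt(target / eb)
        coeffs[0] = approx
        return pywt.waverec(coeffs, "db4")[: len(signal)]

    def extract_bit(signal: np.ndarray, level: int = 5) -> int:
        coeffs = pywt.wavedec(signal.astype(np.float64), "db4", level=level)
        approx = coeffs[0]
        third = len(approx) // 3
        ea = np.sum(approx[:third] ** 2)
        eb = np.sum(approx[third:2 * third] ** 2)
        ec = np.sum(approx[2 * third:3 * third] ** 2)
        return int(eb > (ea + ec) / 2)
    ```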

  19. Comparative evaluation of audio and audio - tactile methods to improve oral hygiene status of visually impaired school children

    OpenAIRE

    R Krishnakumar; Swarna Swathi Silla; Sugumaran K Durai; Mohan Govindarajan; Syed Shaheed Ahamed; Logeshwari Mathivanan

    2016-01-01

    Background: Visually impaired children are unable to maintain good oral hygiene, as their tactile abilities are often underdeveloped owing to their visual disturbances. Conventional brushing techniques are often poorly comprehended by these children and hence, it was decided to evaluate the effectiveness of audio and audio-tactile methods in improving the oral hygiene of these children. Objective: To evaluate and compare the effectiveness of audio and audio-tactile methods in improving oral h...

  20. Hysteretic self-oscillating bandpass current mode control for Class D audio amplifiers driving capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2013-01-01

    A hysteretic self-oscillating bandpass current mode control (BPCM) scheme for Class D audio amplifiers driving capacitive transducers are presented. The scheme provides excellent stability margins and low distortion over a wide range of operating conditions. Small-signal behavior of the amplifier...... the rules of electrostatics have been known as very interesting alternatives to the traditional inefficient electrodynamic transducers. When driving capacitive transducers from a Class D audio amplifier the high impedance nature of the load represents a key challenge. The BPCM control scheme ensures a flat...

  1. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category in sports video, racquet sports video, e.g. table tennis, tennis and badminton, has been paid little attention in the past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generating based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of temporal relationship between audio and visual signals, we can specify the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an exciting model is proposed to rank the detected rally scenes from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  2. Signal processing for radiation detectors

    CERN Document Server

    Nakhostin, Mohammad

    2018-01-01

    This book provides a clear understanding of the principles of signal processing of radiation detectors. It puts great emphasis on the characteristics of pulses from various types of detectors and offers a full overview on the basic concepts required to understand detector signal processing systems and pulse processing techniques. Signal Processing for Radiation Detectors covers all of the important aspects of signal processing, including energy spectroscopy, timing measurements, position-sensing, pulse-shape discrimination, and radiation intensity measurement. The book encompasses a wide range of applications so that readers from different disciplines can benefit from all of the information. In addition, this resource: * Describes both analog and digital techniques of signal processing * Presents a complete compilation of digital pulse processing algorithms * Extrapolates content from more than 700 references covering classic papers as well as those of today * Demonstrates concepts with more than 340 origin...

  3. Reduction in time-to-sleep through EEG based brain state detection and audio stimulation.

    Science.gov (United States)

    Zhuo Zhang; Cuntai Guan; Ti Eu Chan; Juanhong Yu; Aung Aung Phyo Wai; Chuanchu Wang; Haihong Zhang

    2015-08-01

    We developed an EEG- and audio-based sleep sensing and enhancing system, called iSleep (interactive Sleep enhancement apparatus). The system adopts a closed-loop approach which optimizes the audio recording selection based on the user's sleep status detected through our online EEG computing algorithm. The iSleep prototype comprises two major parts: 1) a sleeping mask integrated with a single-channel EEG electrode and amplifier, a pair of stereo earphones and a microcontroller with wireless circuit for control and data streaming; 2) a mobile app to receive EEG signals for online sleep monitoring and audio playback control. In this study we attempt to validate our hypothesis that appropriate audio stimulation in relation to brain state can induce faster onset of sleep and improve the quality of a nap. We conduct experiments on 28 healthy subjects, each undergoing two nap sessions - one with a quiet background and one with our audio stimulation. We compare the time-to-sleep in both sessions between two groups of subjects, i.e., fast and slow sleep onset groups. The p-value obtained from the Wilcoxon signed-rank test is 1.22e-04 for the slow onset group, which demonstrates that iSleep can significantly reduce the time-to-sleep for people with difficulty in falling asleep.
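    The statistical comparison reported above is a paired, non-parametric test. The snippet below shows how such a comparison can be run with SciPy on hypothetical time-to-sleep values; the numbers are invented for illustration and are not the study's data.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical time-to-sleep values (minutes), paired per subject:
    # quiet background vs. audio stimulation for a slow-onset group.
    quiet = np.array([38.0, 42.5, 35.0, 47.0, 40.5, 44.0, 36.5, 41.0])
    stimulated = np.array([22.0, 30.5, 24.0, 33.0, 28.5, 31.0, 25.0, 29.5])

    # Paired, non-parametric comparison of the two conditions.
    stat, p_value = wilcoxon(quiet, stimulated)
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4g}")
    ```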

  4. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  5. Automatic Detection and Classification of Audio Events for Road Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Noor Almaadeed

    2018-06-01

    Full Text Available This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, research has shown several visual surveillance systems that have been proposed for road monitoring to detect accidents with an aim to improve safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.

  6. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under the six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were the video recordings of a face of a female Japanese speaking long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delays in sixteen untrained young subjects. Speech intelligibility under the audio-delay condition of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.

  7. Presence and the utility of audio spatialization

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2005-01-01

    The primary concern of this paper is whether the utility of audio spatialization, as opposed to the fidelity of audio spatialization, impacts presence. An experiment is reported that investigates the presence-performance relationship by decoupling spatial audio fidelity (realism) from task...... performance by varying the spatial fidelity of the audio independently of its relevance to performance on the search task that subjects were to perform. This was achieved by having conditions in which subjects searched for a music-playing radio (an active sound source) and having conditions in which...... supplied only nonattenuated audio was detrimental to performance. Even so, this group of subjects consistently had the largest increase in presence scores over the baseline experiment. Further, the Witmer and Singer (1998) presence questionnaire was more sensitive to whether the audio source was active...

  8. Probabilistic Graphical Models for the Analysis and Synthesis of Musical Audio

    Science.gov (United States)

    Hoffmann, Matthew Douglas

    Content-based Music Information Retrieval (MIR) systems seek to automatically extract meaningful information from musical audio signals. This thesis applies new and existing generative probabilistic models to several content-based MIR tasks: timbral similarity estimation, semantic annotation and retrieval, and latent source discovery and separation. In order to estimate how similar two songs sound to one another, we employ a Hierarchical Dirichlet Process (HDP) mixture model to discover a shared representation of the distribution of timbres in each song. Comparing songs under this shared representation yields better query-by-example retrieval quality and scalability than previous approaches. To predict what tags are likely to apply to a song (e.g., "rap," "happy," or "driving music"), we develop the Codeword Bernoulli Average (CBA) model, a simple and fast mixture-of-experts model. Despite its simplicity, CBA performs at least as well as state-of-the-art approaches at automatically annotating songs and finding to what songs in a database a given tag most applies. Finally, we address the problem of latent source discovery and separation by developing two Bayesian nonparametric models, the Shift-Invariant HDP and Gamma Process NMF. These models allow us to discover what sounds (e.g. bass drums, guitar chords, etc.) are present in a song or set of songs and to isolate or suppress individual sources. These models' ability to decide how many latent sources are necessary to model the data is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given song or set of songs. Once they have been fit to data, probabilistic models can also be used to drive the synthesis of new musical audio, both for creative purposes and to qualitatively diagnose what information a model does and does not capture. We also adapt the SIHDP model to create new versions of input audio with arbitrary sample sets, for example, to create

  9. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.

  10. Digital signal processing the Tevatron BPM signals

    International Nuclear Information System (INIS)

    Cancelo, G.; James, E.; Wolbers, S.

    2005-01-01

    The Beam Position Monitor (TeV BPM) readout system at Fermilab's Tevatron has been updated and is currently being commissioned. The new BPMs use new analog and digital hardware to achieve better beam position measurement resolution. The new system reads signals from both ends of the existing directional stripline pickups to provide simultaneous proton and antiproton measurements. The signals provided by the two ends of the BPM pickups are processed by analog band-pass filters and sampled by 14-bit ADCs at 74.3MHz. A crucial part of this work has been the design of digital filters that process the signal. This paper describes the digital processing and estimation techniques used to optimize the beam position measurement. The BPM electronics must operate in narrow-band and wide-band modes to enable measurements of closed-orbit and turn-by-turn positions. The filtering and timing conditions of the signals are tuned accordingly for the operational modes. The analysis and the optimized result for each mode are presented
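    As a loose illustration of the kind of digital processing described (not the actual Tevatron firmware), the sketch below band-pass filters the sampled signals from the two ends of a pickup with SciPy and forms a toy difference-over-sum position estimate; the filter order and passband are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy import signal

    FS = 74.3e6                       # ADC sample rate quoted in the abstract (Hz)
    BAND = (20.0e6, 24.0e6)           # assumed analysis band, for illustration only

    # 4th-order Butterworth band-pass applied as a zero-phase digital filter.
    SOS = signal.butter(4, BAND, btype="bandpass", fs=FS, output="sos")

    def position_estimate(plate_a: np.ndarray, plate_b: np.ndarray) -> float:
        """Toy difference-over-sum beam position estimate from two pickup signals."""
        a = signal.sosfiltfilt(SOS, plate_a)
        b = signal.sosfiltfilt(SOS, plate_b)
        amp_a = np.sqrt(np.mean(a ** 2))
        amp_b = np.sqrt(np.mean(b ** 2))
        return (amp_a - amp_b) / (amp_a + amp_b)
    ```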

  11. Distortion Analysis Toolkit—A Software Tool for Easy Analysis of Nonlinear Audio Systems

    Directory of Open Access Journals (Sweden)

    Jyri Pakarinen

    2010-01-01

    Full Text Available Several audio effects devices deliberately add nonlinear distortion to the processed signal in order to create a desired sound. When creating virtual analog models of nonlinearly distorting devices, it would be very useful to carefully analyze the type of distortion, so that the model could be made as realistic as possible. While traditional system analysis tools such as the frequency response give detailed information on the operation of linear and time-invariant systems, they are less useful for analyzing nonlinear devices. Furthermore, although there do exist separate algorithms for nonlinear distortion analysis, there is currently no unified, easy-to-use tool for rapid analysis of distorting audio systems. This paper offers a remedy by introducing a new software tool for easy analysis of distorting effects. A comparison between a well-known guitar tube amplifier and two commercial software simulations is presented as a case study. This freely available software is written in Matlab language, but the analysis tool can also run as a standalone program, so the user does not need to have Matlab installed in order to perform the analysis.
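    For readers without MATLAB, the core of a distortion measurement can be sketched in a few lines: excite a memoryless nonlinearity with a sine and estimate THD from the harmonic peaks of its spectrum. The soft-clipping stand-in and the bin-picking tolerance below are assumptions, not the toolkit's algorithms.

    ```python
    import numpy as np

    def thd_percent(x: np.ndarray, fs: float, f0: float, n_harmonics: int = 10) -> float:
        """Total harmonic distortion of a (roughly) f0-periodic signal, in percent.
        A simple FFT-bin estimate, not the full analysis of the toolkit."""
        spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

        def peak(f):                              # strongest bin near frequency f
            band = (freqs > f * 0.95) & (freqs < f * 1.05)
            return spectrum[band].max() if band.any() else 0.0

        fundamental = peak(f0)
        harmonics = [peak(k * f0) for k in range(2, n_harmonics + 2) if k * f0 < fs / 2]
        return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / fundamental

    # Example: a soft-clipped 1 kHz tone as a stand-in for a distorting amplifier.
    fs, f0 = 48000, 1000.0
    t = np.arange(fs) / fs
    distorted = np.tanh(3.0 * np.sin(2 * np.pi * f0 * t))   # memoryless nonlinearity
    print(f"THD = {thd_percent(distorted, fs, f0):.2f} %")
    ```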

  12. Digital audio watermarking fundamentals, techniques and challenges

    CERN Document Server

    Xiang, Yong; Yan, Bin

    2017-01-01

    This book offers comprehensive coverage on the most important aspects of audio watermarking, from classic techniques to the latest advances, from commonly investigated topics to emerging research subdomains, and from the research and development achievements to date, to current limitations, challenges, and future directions. It also addresses key topics such as reversible audio watermarking, audio watermarking with encryption, and imperceptibility control methods. The book sets itself apart from the existing literature in three main ways. Firstly, it not only reviews classical categories of audio watermarking techniques, but also provides detailed descriptions, analysis and experimental results of the latest work in each category. Secondly, it highlights the emerging research topic of reversible audio watermarking, including recent research trends, unique features, and the potentials of this subdomain. Lastly, the joint consideration of audio watermarking and encryption is also reviewed. With the help of this...

  13. Experiment and practice on signal processing

    International Nuclear Information System (INIS)

    2002-11-01

    The contents of this book cover basic practice with CEM Tool; discrete-time signals and experiments and practice with systems; experiments and practice with discrete-time signal sampling; practice of frequency analysis; experiments in digital filter design; applications of digital signal processing; a project related to voice; basic principles of signal processing; techniques of basic image signal processing; biology, astronomy and robot soccer as applications of image signal processing techniques; control of video signals; and a project related to images. It also includes an introduction to CEM Linker I/O at the end.

  14. Experiment and practice on signal processing

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-15

    The contents of this book cover basic practice with CEM Tool; discrete-time signals and experiments and practice with systems; experiments and practice with discrete-time signal sampling; practice of frequency analysis; experiments in digital filter design; applications of digital signal processing; a project related to voice; basic principles of signal processing; techniques of basic image signal processing; biology, astronomy and robot soccer as applications of image signal processing techniques; control of video signals; and a project related to images. It also includes an introduction to CEM Linker I/O at the end.

  15. Carrier Distortion in Hysteretic Self-Oscillating Class-D Audio Power

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Kofod; Andersen, Michael A. E.

    2009-01-01

    An important distortion mechanism in hysteretic self-oscillating (SO) class-D (switch mode) power amplifiers, carrier distortion, is analyzed and an optimization method is proposed. This mechanism is an issue in any power amplifier application where a high degree of proportionality between input and output is required, such as in audio power amplifiers or xDSL drivers. From an average-mode point of view, carrier distortion is shown to be caused by nonlinear variation of the hysteretic comparator input average voltage with the output average voltage. This easily causes total harmonic distortion figures in excess of 0.1–0.2%, inadequate for high-quality audio applications. Carrier distortion is shown to be minimized when the feedback system is designed to provide a triangular carrier (sliding) signal at the input of a hysteretic comparator. The proposed optimization method is experimentally...

  16. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  17. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  18. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Full Text Available Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  19. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Full Text Available Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger

  20. Semantic Context Detection Using Audio Event Fusion

    Directory of Open Access Journals (Sweden)

    Cheng Wen-Huang

    2006-01-01

    Full Text Available Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine, SVM) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
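
    A hedged sketch of the event-level modelling stage, assuming frame-wise acoustic features and the hmmlearn package; the feature extraction and the semantic-context fusion stage (ergodic HMM or SVM) are omitted:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_event_hmms(training_data, n_states=3):
            # training_data: dict of event name -> list of (n_frames, n_dims) arrays
            models = {}
            for name, sequences in training_data.items():
                X = np.vstack(sequences)
                lengths = [len(seq) for seq in sequences]
                hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
                hmm.fit(X, lengths)
                models[name] = hmm
            return models

        def classify_event(models, features):
            # Pick the event model with the highest log-likelihood for the segment.
            return max(models, key=lambda name: models[name].score(features))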

  1. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystem™ (SPW™), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  2. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-06-01

    This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  3. Elicitation of attributes for the evaluation of audio-on audio-interference

    DEFF Research Database (Denmark)

    Francombe, Jon; Mason, R.; Dewhirst, M.

    2014-01-01

    An experiment to determine the perceptual attributes of the experience of listening to a target audio program in the presence of an audio interferer was performed. The first stage was a free elicitation task in which a total of 572 phrases were produced. In the second stage, a consensus vocabulary procedure was used to reduce these phrases into a comprehensive set of attributes. Groups of experienced and inexperienced listeners determined nine and eight attributes, respectively. These attribute sets were combined by the listeners to produce a final set of 12 attributes: masking, calming, distraction...

  4. SECONDARY AUDIO ACTIVITY TO MAINTAIN THE ALERTNESS OF INDONESIAN CAR DRIVERS

    Directory of Open Access Journals (Sweden)

    Iftikar Zahedi Sutalaksana

    2013-03-01

    … awake, alert, and able to process all the stimuli well. The results of this study generate a form of audio response test that is integrated with the driving system in the car. The sound source is played at a constant intensity between 80-85 dB. The sound stops if the driver responds to the sound stimulus. The response test is designed to be capable of monitoring the driver's level of alertness while driving. Its application is expected to help reduce the rate of traffic accidents in Indonesia. Keywords: driving, secondary activities, audio, alertness, response test

  5. Agency Video, Audio and Imagery Library

    Science.gov (United States)

    Grubbs, Rodney

    2015-01-01

    The purpose of this presentation was to inform the ISS International Partners of the new NASA Agency Video, Audio and Imagery Library (AVAIL) website. AVAIL is a new resource for the public to search for and download NASA-related imagery, and is not intended to replace the current process by which the International Partners receive their Space Station imagery products.

  6. CERN automatic audio-conference service

    CERN Multimedia

    Sierra Moral, R

    2009-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first Euro...

  7. CERN automatic audio-conference service

    CERN Document Server

    Sierra Moral, R

    2010-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by 6 operators, CERN already has more than 20000 hours and 5700 audio-conferences per year. However, the traditional telephone based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first Euro...

  8. Debugging of Class-D Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Crone, Lasse; Pedersen, Jeppe Arnsdorf; Mønster, Jakob Døllner

    2012-01-01

    Determining and optimizing the performance of a Class-D audio power amplifier can be very difficult without knowledge of the use of audio performance measuring equipment and of how the various noise and distortion sources influence the audio performance. This paper gives an introduction on how to measure...

  9. Foundations of signal processing

    CERN Document Server

    Vetterli, Martin; Goyal, Vivek K

    2014-01-01

    This comprehensive and engaging textbook introduces the basic principles and techniques of signal processing, from the fundamental ideas of signals and systems theory to real-world applications. Students are introduced to the powerful foundations of modern signal processing, including the basic geometry of Hilbert space, the mathematics of Fourier transforms, and essentials of sampling, interpolation, approximation and compression. The authors discuss real-world issues and hurdles to using these tools, and ways of adapting them to overcome problems of finiteness and localisation, the limitations of uncertainty and computational costs. Standard engineering notation is used throughout, making mathematical examples easy for students to follow, understand and apply. It includes over 150 homework problems and over 180 worked examples, specifically designed to test and expand students' understanding of the fundamentals of signal processing, and is accompanied by extensive online materials designed to aid learning, ...

  10. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-12-01

    Full Text Available This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  11. Genomic signal processing

    CERN Document Server

    Shmulevich, Ilya

    2007-01-01

    Genomic signal processing (GSP) can be defined as the analysis, processing, and use of genomic signals to gain biological knowledge, and the translation of that knowledge into systems-based applications that can be used to diagnose and treat genetic diseases. Situated at the crossroads of engineering, biology, mathematics, statistics, and computer science, GSP requires the development of both nonlinear dynamical models that adequately represent genomic regulation, and diagnostic and therapeutic tools based on these models. This book facilitates these developments by providing rigorous mathema

  12. Synthesis of audio spectra using a diffraction model.

    Science.gov (United States)

    Vijayakumar, V; Eswaran, C

    2006-12-01

    It is shown that the intensity variations of an audio signal in the frequency domain can be obtained by using a mathematical function containing a series of weighted complex Bessel functions. With proper choice of values for two parameters, this function can transform an input spectrum of discrete frequencies of unit intensity into the known spectra of different musical instruments. Specific examples of musical instruments are considered for evaluating the performance of this method. It is found that this function yields musical spectra with a good degree of accuracy.
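
    The exact function and its two parameters are not reproduced in the record, but the general idea of shaping a unit-intensity line spectrum with a weighted series of Bessel functions can be sketched as follows; the weights and parameter values below are purely illustrative, not the paper's formula:

        import numpy as np
        from scipy.special import jv

        def bessel_envelope(freqs, a=0.8, b=0.01, n_terms=8):
            # Weighted series of Bessel functions of the first kind evaluated over frequency.
            orders = np.arange(n_terms)
            weights = a ** orders                    # hypothetical weighting scheme
            return np.abs(sum(w * jv(k, b * freqs) for k, w in zip(orders, weights)))

        freqs = np.arange(1, 21) * 220.0             # 20 harmonics of 220 Hz, unit intensity
        shaped = bessel_envelope(freqs)              # shaped intensities of the line spectrum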

  13. Audio engineering 101 a beginner's guide to music production

    CERN Document Server

    Dittmar, Tim

    2013-01-01

    Audio Engineering 101 is a real-world guide for starting out in the recording industry. If you have the dream, the ideas, the music and the creativity but don't know where to start, then this book is for you! Filled with practical advice on how to navigate the recording world, from an author with first-hand, real-life experience, Audio Engineering 101 will help you succeed in the exciting, but tough and confusing, music industry. Covering all you need to know about the recording process, from the characteristics of sound to a guide to microphones to analog versus digital

  14. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    Since more and more advertisements swarm into radios, it is necessary to establish an audio advertising dataset which could be used to analyze and classify the advertisement. A method of how to establish a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement's sample is given in *.wav file format, and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The classifying rationality of the advertisements in this dataset is proved by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for correlative audio advertisement experimental studies.
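
    A minimal sketch of the clustering check mentioned above, assuming per-clip feature vectors have already been extracted into an array X and that scikit-learn is available; the choice of k-means and two principal components is an assumption, not the paper's exact procedure:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def cluster_ads(X, n_classes=4, n_components=2):
            # X: (n_clips, n_features) audio features for the *.wav advertisement samples
            reduced = PCA(n_components=n_components).fit_transform(X)
            labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(reduced)
            return reduced, labels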

  15. Efficient Audio Power Amplification - Challenges

    DEFF Research Database (Denmark)

    Andersen, Michael Andreas E.

    2005-01-01

    For more than a decade efficient audio power amplification has evolved and today switch-mode audio power amplification in various forms are the state-of-the-art. The technical steps that lead to this evolution are described and in addition many of the challenges still to be faced and where...

  16. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  17. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  18. Consequence of audio visual collection in school libraries

    OpenAIRE

    Kuri, Ramesh

    2016-01-01

    An audio-visual collection in the library plays an important role in teaching and learning. The importance of audio-visual (AV) technology in education should not be underestimated. If the audio-visual collection in a library is carefully planned and designed, it can provide a rich learning environment. In this article, the author discusses the consequences of an audio-visual collection in libraries, especially for students of the school library.

  19. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  20. New audio applications of beryllium metal

    International Nuclear Information System (INIS)

    Sato, M.

    1977-01-01

    The major applications of beryllium metal in the field of audio appliances are the vibrating cones for two types of speakers, the 'TWEETER' for high-range sound and the 'SQUAWKER' for mid-range sound, and also the beryllium cantilever tube assembled in stereo cartridges. These new applications are based on the characteristic property of beryllium of having a high ratio of modulus of elasticity to specific gravity. The production of these audio parts is described, and the audio response is shown. (author)

  1. Digitisation of the CERN Audio Archives

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    From the creation of CERN in 1954 until the mid-1980s, the audiovisual service recorded hundreds of hours of moments of life at CERN on audio tapes. These moments range from inaugurations of new facilities to VIP speeches and general-interest cultural seminars. The preservation process started in June 2005. In these pictures, we see Waltraud Hug working on an open-reel tape.

  2. Preattentive processing of audio-visual emotional signals

    DEFF Research Database (Denmark)

    Föcker, J.; Gondan, Matthias; Röder, B.

    2011-01-01

    Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly d...

  3. DOA and Pitch Estimation of Audio Sources using IAA-based Filtering

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    For decades, it has been investigated how to separately solve the problems of both direction-of-arrival (DOA) and pitch estimation. Recently, it was found that estimating these parameters jointly from multichannel recordings of audio can be extremely beneficial. Many joint estimators are based on knowledge of the inverse sample covariance matrix. Typically, this covariance is estimated using the sample covariance matrix, but for this estimate to be full rank, many temporal samples are needed. In cases with non-stationary signals, this is a serious limitation. We therefore investigate how a recent joint DOA and pitch filtering-based estimator can be combined with the iterative adaptive approach to circumvent this limitation in joint DOA and pitch estimation of audio sources. Simulations show a clear improvement compared to when using the sample covariance matrix, and the considered approach also...
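
    The limitation the record points to can be made concrete with a small sketch: the sample covariance matrix built from N snapshots has rank at most N, so with few snapshots it cannot be inverted for covariance-based estimators (the array sizes below are arbitrary):

        import numpy as np

        def sample_covariance(X):
            # X: (n_channels, n_snapshots) complex multichannel data; returns X X^H / N.
            return X @ X.conj().T / X.shape[1]

        rng = np.random.default_rng(0)
        X = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
        R = sample_covariance(X)
        print(np.linalg.matrix_rank(R))   # 4 < 8 channels, so R is singular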

  4. Advanced digital signal processing and noise reduction

    CERN Document Server

    Vaseghi, Saeed V

    2008-01-01

    Digital signal processing plays a central role in the development of modern communication and information processing systems. The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy and therefore noise reduction, the removal of channel distortion, and replacement of lost samples are important parts of a signal processing system. The fourth edition of Advanced Digital Signal Processing and Noise Reduction updates an

  5. Fast digitizing and digital signal processing of detector signals

    International Nuclear Information System (INIS)

    Hannaske, Roland

    2008-01-01

    A fast-digitizer data acquisition system recently installed at the neutron time-of-flight experiment nELBE, which is located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate for the ballistic deficit, and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a ²²Na source and is compared to the energy resolution achieved with analogously processed signals. On the other hand, signals from the photomultipliers of barium fluoride and plastic scintillation detectors are digitized. These signals have risetimes of only a few nanoseconds. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. Therefore, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values coming from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. However, digital methods give better results in sub-ns timing than analog signal processing. (orig.)

  6. Efficient audio power amplification - challenges

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, Michael A.E.

    2005-07-01

    For more than a decade efficient audio power amplification has evolved and today switch-mode audio power amplification in various forms are the state-of-the-art. The technical steps that lead to this evolution are described and in addition many of the challenges still to be faced and where extensive research and development are needed is covered. (au)

  7. SignalPlant: an open signal processing software platform.

    Science.gov (United States)

    Plesinger, F; Jurco, J; Halamek, J; Jurak, P

    2016-07-01

    The growing technical standard of acquisition systems allows the acquisition of large records, often reaching gigabytes or more in size, as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g. filter, analyze, etc.) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant, a stand-alone application for signal inspection, labeling and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph and similar devices. The rendering latency was compared with EEGLAB and proves significantly faster when displaying an image from a large number of samples (e.g., 163 times faster for 75 × 10⁶ samples). The presented SignalPlant software is available free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties, ensuring its adaptability to future research tasks and new data formats.

  8. Detection Of Alterations In Audio Files Using Spectrograph Analysis

    Directory of Open Access Journals (Sweden)

    Anandha Krishnan G

    2015-08-01

    Full Text Available The corresponding study was carried out to detect changes in audio files using spectrograph analysis. An audio file format is a file format for storing digital audio data on a computer system. A sound spectrograph is a laboratory instrument that displays a graphical representation of the strengths of the various component frequencies of a sound as time passes. The objectives of the study were to find the changes in the spectrograph of an audio file after altering it, to compare the alterations with the spectrograph of the original file, and to check for similarities and differences between MP3 and WAV. Five different alterations were carried out on each audio file to analyze the differences between the original and the altered file. To alter an MP3 or WAV audio file by cut-copy, the file was opened in Audacity and a different audio segment was pasted into it; this new file was then analyzed to view the differences. By adjusting the necessary parameters, the noise was reduced and the differences between the new file and the original file were analyzed. The edited audio file was opened in the software named Spek, which produces a graph of that particular file after analysis; the graph was saved for further analysis. The graph of the original audio was compared with the graph of the edited audio file to identify the alterations.
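
    A hedged sketch of the comparison described above, using scipy spectrograms rather than the Audacity/Spek workflow from the study; the file names are placeholders:

        import numpy as np
        import soundfile as sf
        from scipy.signal import spectrogram

        def db_spectrogram(path, nperseg=1024):
            y, fs = sf.read(path)
            if y.ndim > 1:                        # mix stereo down to mono
                y = y.mean(axis=1)
            f, t, S = spectrogram(y, fs=fs, nperseg=nperseg)
            return 10 * np.log10(S + 1e-12)

        s_orig = db_spectrogram("original.wav")   # placeholder file names
        s_edit = db_spectrogram("edited.wav")
        n = min(s_orig.shape[1], s_edit.shape[1])
        diff = np.abs(s_orig[:, :n] - s_edit[:, :n])
        print("mean spectral difference (dB):", diff.mean())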

  9. Audio-Visual Aid in Teaching "Fatty Liver"

    Science.gov (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio-visual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  10. Advanced Methods of Biomedical Signal Processing

    CERN Document Server

    Cerutti, Sergio

    2011-01-01

    This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult

  11. Handbook of Signal Processing in Acoustics

    CERN Document Server

    Havelock, David; Vorländer, Michael

    2009-01-01

    The Handbook of Signal Processing in Acoustics presents signal processing as it is practiced in the field of acoustics. The Handbook is organized by areas of acoustics, with recognized leaders coordinating the self-contained chapters of each section. It brings together a wide range of perspectives from over 100 authors to reveal the interdisciplinary nature of signal processing in acoustics. Success in acoustic applications often requires juggling both the acoustic and the signal processing parameters of the problem. This handbook brings the key issues from both into perspective and is complementary to other reference material on the two subjects. It is a unique resource for experts and practitioners alike to find new ideas and techniques within the diversity of signal processing in acoustics.

  12. Classroom Audio Distribution in the Postsecondary Setting: A Story of Universal Design for Learning

    Science.gov (United States)

    Flagg-Williams, Joan B.; Bokhorst-Heng, Wendy D.

    2016-01-01

    Classroom Audio Distribution Systems (CADS) consist of amplification technology that enhances the teacher's, or sometimes the student's, vocal signal above the background noise in a classroom. Much research has supported the benefits of CADS for student learning, but most of it has focused on elementary school classrooms. This study investigated…

  13. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    Science.gov (United States)

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
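
    For orientation, a plain (non-tensorial) dynamic time warping distance is sketched below; the paper's tensorial DTW over articulation-index time-frequency representations is more involved than this, so treat it only as an illustration of the alignment idea:

        import numpy as np

        def dtw_distance(a, b):
            # a, b: (n_frames, n_dims) feature sequences; returns the cumulative DTW cost.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]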

  14. AudioMUD: a multiuser virtual environment for blind people.

    Science.gov (United States)

    Sánchez, Jaime; Hassler, Tiago

    2007-03-01

    A number of virtual environments have been developed during the last years. Among them are some applications for blind people based on different types of audio, from simple sounds to 3-D audio. In this study, we pursued a different approach. We designed AudioMUD by using spoken text to describe the environment, navigation, and interaction. We have also introduced some collaborative features into the interaction between blind users. The core of a multiuser MUD game is a networked textual virtual environment. We developed AudioMUD by adding some collaborative features to the basic idea of a MUD and placed a simulated virtual environment inside the human body. This paper presents the design and usability evaluation of AudioMUD. Blind learners were motivated when interacting with AudioMUD, and audio and interface design elements helped to improve the interaction.

  15. Fractional Processes and Fractional-Order Signal Processing Techniques and Applications

    CERN Document Server

    Sheng, Hu; Qiu, TianShuang

    2012-01-01

    Fractional processes are widely found in science, technology and engineering systems. In Fractional Processes and Fractional-order Signal Processing, some complex random signals, characterized by the presence of a heavy-tailed distribution or non-negligible dependence between distant observations (local and long memory), are introduced and examined from the ‘fractional’ perspective using simulation, fractional-order modeling and filtering and realization of fractional-order systems. These fractional-order signal processing (FOSP) techniques are based on fractional calculus, the fractional Fourier transform and fractional lower-order moments. Fractional Processes and Fractional-order Signal Processing: • presents fractional processes of fixed, variable and distributed order studied as the output of fractional-order differential systems; • introduces FOSP techniques and the fractional signals and fractional systems point of view; • details real-world-application examples of FOSP techniques to demonstr...

  16. Audio Recording of Children with Dyslalia

    OpenAIRE

    Stefan Gheorghe Pentiuc; Maria D. Schipor; Ovidiu A. Schipor

    2008-01-01

    In this paper we present our research regarding automatic parsing of audio recordings. These recordings are obtained from children with dyslalia and are necessary for an accurate identification of speech problems. We developed a software application that helps parse audio recordings in real time.

  17. Predicting the Overall Spatial Quality of Automotive Audio Systems

    Science.gov (United States)

    Koya, Daisuke

    The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial

  18. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  19. Tourism research and audio methods

    DEFF Research Database (Denmark)

    Jensen, Martin Trandberg

    2016-01-01

    Audio methods enrich sensuous tourism ethnographies. • The note suggests five research avenues for future auditory scholarship. • Sensuous tourism research has neglected the role of sounds in embodied tourism experiences.

  20. Newnes audio and Hi-Fi engineer's pocket book

    CERN Document Server

    Capel, Vivian

    2013-01-01

    Newnes Audio and Hi-Fi Engineer's Pocket Book, Second Edition provides concise discussion of several audio topics. The book is comprised of 10 chapters that cover different audio equipment. The coverage of the text includes microphones, gramophones, compact discs, and tape recorders. The book also covers high-quality radio, amplifiers, and loudspeakers. The book then reviews the concepts of sound and acoustics, and presents some facts and formulas relevant to audio. The text will be useful to sound engineers and other professionals whose work involves sound systems.

  1. Audio Recording of Children with Dyslalia

    Directory of Open Access Journals (Sweden)

    Stefan Gheorghe Pentiuc

    2008-01-01

    Full Text Available In this paper we present our research regarding automatic parsing of audio recordings. These recordings are obtained from children with dyslalia and are necessary for an accurate identification of speech problems. We developed a software application that helps parse audio recordings in real time.

  2. Audio Journal in an ELT Context

    Directory of Open Access Journals (Sweden)

    Neşe Aysin Siyli

    2012-09-01

    Full Text Available It is widely acknowledged that one of the most serious problems students of English as a foreign language face is being deprived of opportunities to practice the language outside the classroom. Generally, the classroom is the sole environment where they can practice English, which by its nature does not provide a rich setting to help students develop their competence by putting the language into practice. Motivated by this need, this descriptive study investigated the impact of audio dialog journals on students' speaking skills. It also aimed to gain insights into students' and the teacher's opinions on keeping audio dialog journals outside the class. The data of the study came from student and teacher audio dialog journals, students' written feedback, interviews held with the students, and teacher observations. The descriptive analysis of the data revealed that audio dialog journals served a number of functions, ranging from cognitive to linguistic, from pedagogical to psychological, and social. The findings and pedagogical implications of the study are discussed in detail.

  3. Music Genre Classification Using MIDI and Audio Features

    Science.gov (United States)

    Cataltepe, Zehra; Yaslan, Yusuf; Sonmez, Abdullah

    2007-12-01

    We report our findings on using MIDI files and audio features from MIDI, separately and combined together, for MIDI music genre classification. We use McKay and Fujinaga's 3-root and 9-leaf genre data set. In order to compute distances between MIDI pieces, we use the normalized compression distance (NCD). NCD uses the compressed length of a string as an approximation to its Kolmogorov complexity and has previously been used for music genre and composer clustering. We convert the MIDI pieces to audio and then use the audio features to train different classifiers. The MIDI and audio-from-MIDI classifiers alone achieve much lower accuracies than those reported by McKay and Fujinaga, who used a number of domain-based MIDI features rather than NCD for their classification. Combining the MIDI and audio-from-MIDI classifiers improves accuracy and gets closer to, but still remains below, McKay and Fujinaga's results. The best root genre accuracies achieved using MIDI, audio, and their combination are 0.75, 0.86, and 0.93, respectively, compared to 0.98 for McKay and Fujinaga. Successful classifier combination requires diversity of the base classifiers. We achieve diversity by using a certain number of seconds of the MIDI file, different sample rates and sizes for the audio file, and different classification algorithms.
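
    The normalized compression distance used in the record has a simple form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is a compressor's output length; a small sketch with zlib as a stand-in compressor:

        import zlib

        def c(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # e.g. ncd(open("piece_a.mid", "rb").read(), open("piece_b.mid", "rb").read())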

  4. ANALYSIS OF MULTIMODAL FUSION TECHNIQUES FOR AUDIO-VISUAL SPEECH RECOGNITION

    Directory of Open Access Journals (Sweden)

    D.V. Ivanko

    2016-05-01

    Full Text Available The paper deals with an analytical review covering the latest achievements in the field of audio-visual (AV) fusion (integration) of multimodal information. We discuss the main challenges and report on approaches to address them. One of the most important tasks of AV integration is to understand how the modalities interact and influence each other. The paper addresses this problem in the context of AV speech processing and speech recognition. In the first part of the review we set out the basic principles of AV speech recognition and give a classification of audio and visual features of speech. Special attention is paid to the systematization of the existing techniques and the AV data fusion methods. In the second part we provide a consolidated list of tasks and applications that use AV fusion, based on our analysis of the research area. We also indicate the methods, techniques, and audio and video features used. We propose a classification of AV integration and discuss the advantages and disadvantages of different approaches. We draw conclusions and offer our assessment of the future of the field of AV fusion. In further research we plan to implement a system of audio-visual Russian continuous speech recognition using advanced methods of multimodal fusion.

  5. Signal processing: opportunities for superconductive circuits

    International Nuclear Information System (INIS)

    Ralston, R.W.

    1985-01-01

    Prime motivators in the evolution of increasingly sophisticated communication and detection systems are the needs for handling ever wider signal bandwidths and higher data processing speeds. These same needs drive the development of electronic device technology. Until recently the superconductive community has been tightly focused on digital devices for high speed computers. The purpose of this paper is to describe opportunities and challenges which exist for both analog and digital devices in a less familiar area, that of wideband signal processing. The function and purpose of analog signal-processing components, including matched filters, correlators and Fourier transformers, will be described and examples of superconductive implementations given. A canonic signal-processing system is then configured using these components in combination with analog/digital converters and digital output circuits to highlight the important issues of dynamic range, accuracy and equivalent computation rate. Superconductive circuits hold promise for processing signals of 10-GHz bandwidth. Signal processing systems, however, can be properly designed and implemented only through a synergistic combination of the talents of device physicists, circuit designers, algorithm architects and system engineers. An immediate challenge to the applied superconductivity community is to begin sharing ideas with these other researchers

  6. Measuring the electrical activity of the chick embryo heart through a 16-bit audio card monitored by the Goldwave™ software

    Science.gov (United States)

    Silva, Dilson; Cortez, Celia Martins

    2015-12-01

    In the present work we used a high-resolution, low-cost apparatus capable of detecting waves that fit inside the audio bandwidth, and the software package Goldwave™ for graphical display, processing and monitoring of the signals, to study aspects of the electric heart activity of early avian embryos, specifically at the 18th Hamburger & Hamilton stage of embryo development. The species used was the domestic chick (Gallus gallus), and we carried out 23 experiments in which cardiographic spectra of QRS complex waves, representing the propagation of depolarization waves through the ventricles, were recorded using microprobes and reference electrodes placed directly on the embryos. The results show that the technique using a 16-bit audio card monitored by the Goldwave™ software was efficient for studying signal aspects of the heart electric activity of early avian embryos.

  7. Audio-Visual Classification of Sports Types

    DEFF Research Database (Denmark)

    Gade, Rikke; Abou-Zleikha, Mohamed; Christensen, Mads Græsbøll

    2015-01-01

    In this work we propose a method for classification of sports types from combined audio and visual features extracted from thermal video. From the audio, Mel Frequency Cepstral Coefficients (MFCC) are extracted, and PCA is applied to reduce the feature space to 10 dimensions. From the visual modali...
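    The audio branch described above can be approximated with standard tools: MFCCs are computed per frame and PCA reduces them to 10 dimensions. The clip name and the number of MFCC coefficients below are assumptions for illustration.

```python
# Minimal sketch of the audio branch: MFCCs reduced to 10 dimensions with PCA
# (librosa / scikit-learn; the file name is a placeholder).
import librosa
import numpy as np
from sklearn.decomposition import PCA

y, sr = librosa.load("thermal_clip_audio.wav", sr=None)    # hypothetical clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)          # shape (20, n_frames)

pca = PCA(n_components=10)
features = pca.fit_transform(mfcc.T)                        # shape (n_frames, 10)
print(features.shape, pca.explained_variance_ratio_.sum())
```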

  8. Fundamentals of statistical signal processing

    CERN Document Server

    Kay, Steven M

    1993-01-01

    A unified presentation of parameter estimation for those involved in the design and implementation of statistical signal processing algorithms. Covers important approaches to obtaining an optimal estimator and analyzing its performance; and includes numerous examples as well as applications to real-world problems. MARKETS: For practicing engineers and scientists who design and analyze signal processing systems, i.e., to extract information from noisy signals — radar engineer, sonar engineer, geophysicist, oceanographer, biomedical engineer, communications engineer, economist, statistician, physicist, etc.

  9. Radar signal processing and its applications

    CERN Document Server

    Hummel, Robert; Stoica, Petre; Zelnio, Edmund

    2003-01-01

    Radar Signal Processing and Its Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In twelve selected chapters, it describes the latest advances in architectures, design methods, and applications of radar signal processing. The contributors to this work were selected from the leading researchers and practitioners in the field. This work, originally published as Volume 14, Numbers 1-3 of the journal, Multidimensional Systems and Signal Processing, will be valuable to anyone working or researching in the field of radar signal processing. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  10. Silicon Photonics for Signal Processing of Tbit/s Serial Data Signals

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Ji, Hua; Galili, Michael

    2012-01-01

    In this paper, we describe our recent work on signal processing of terabit per second optical serial data signals using pure silicon waveguides. We employ nonlinear optical signal processing in nanoengineered silicon waveguides to perform demultiplexing and optical waveform sampling of 1.28-Tbit/...

  11. Digital signal processing an experimental approach

    CERN Document Server

    Engelberg, Shlomo

    2008-01-01

    Digital Signal Processing is a mathematically rigorous but accessible treatment of digital signal processing that intertwines basic theoretical techniques with hands-on laboratory instruction. Divided into three parts, the book covers various aspects of the digital signal processing (DSP) "problem." It begins with the analysis of discrete-time signals and explains sampling and the use of the discrete and fast Fourier transforms. The second part of the book, covering digital-to-analog and analog-to-digital conversion, provides a practical interlude in the mathematical content before Part III

  12. Signal Conditioning An Introduction to Continuous Wave Communication and Signal Processing

    CERN Document Server

    Das, Apurba

    2012-01-01

    "Signal Conditioning” is a comprehensive introduction to electronic signal processing. The book presents the mathematical basics including the implications of various transformed domain representations in signal synthesis and analysis in an understandable and lucid fashion and illustrates the theory through many applications and examples from communication systems. The ease to learn is supported by well-chosen exercises which give readers the flavor of the subject. Supplementary electronic materials available on http://extras.springer.com including MATLAB codes illuminating applications in the domain of one dimensional electrical signal processing, image processing and speech processing. The book is an introduction for students with a basic understanding in engineering or natural sciences.

  13. Weaving together peer assessment, audios and medical vignettes in teaching medical terms

    Science.gov (United States)

    Khan, Lateef M.

    2015-01-01

    Objectives The current study aims at exploring the possibility of aligning peer assessment, audiovisuals, and medical case-report extracts (vignettes) in medical terminology teaching. In addition, the study wishes to highlight the effectiveness of audio materials and medical history vignettes in preventing medical students' comprehension, listening, writing, and pronunciation errors. The study also aims at capturing the medical students' attitudes towards the teaching and learning process. Methods The study involved 161 medical students who received an intensive medical terminology course through audio and medical history extracts. Peer assessment and formative assessment platforms were applied through fake quizzes in a pre- and post-test manner. An 18-item survey was distributed amongst students to investigate their attitudes and feedback towards the teaching and learning process. Quantitative and qualitative data were analysed using the SPSS software. Results The students did better on the post-tests than on the pre-tests for both the audio and the medical-vignette quizzes, with t-values of -12.09 and -13.60, respectively. Moreover, out of the 133 students, 120 students (90.22%) responded to the survey questions. The students expressed positive attitudes towards the application of audios and vignettes in the teaching and learning of medical terminology and towards the learning process. Conclusions The current study revealed that the teaching and learning of medical terminology have more room for the application of advanced technologies, effective assessment platforms, and active learning strategies in higher education. It also highlights that students are capable of carrying more responsibility for assessment, feedback, and e-learning. PMID:26637986

  14. CERN automatic audio-conference service

    International Nuclear Information System (INIS)

    Sierra Moral, Rodrigo

    2010-01-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by six operators, the service already handles more than 20,000 hours and 5,700 audio-conferences per year. However, the traditional telephone-based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular with users, and the number of conferences has doubled in the past two years.

  15. CERN automatic audio-conference service

    Energy Technology Data Exchange (ETDEWEB)

    Sierra Moral, Rodrigo, E-mail: Rodrigo.Sierra@cern.c [CERN, IT Department 1211 Geneva-23 (Switzerland)

    2010-04-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by six operators, the service already handles more than 20,000 hours and 5,700 audio-conferences per year. However, the traditional telephone-based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular with users, and the number of conferences has doubled in the past two years.

  16. CERN automatic audio-conference service

    Science.gov (United States)

    Sierra Moral, Rodrigo

    2010-04-01

    Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result telephone conferences have become indispensable and widely used. Managed by six operators, the service already handles more than 20,000 hours and 5,700 audio-conferences per year. However, the traditional telephone-based audio-conference system needed to be modernized in three ways. Firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN's recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular with users, and the number of conferences has doubled in the past two years.

  17. Biomedical signal and image processing.

    Science.gov (United States)

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them, and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is not only true for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. Just to give simple examples, topics such as brain–computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring, among others, do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  18. Signal processing in microdosimetry

    International Nuclear Information System (INIS)

    Arbel, A.

    1984-01-01

    Signals occurring in microdosimetric measurements cover a dynamic range of 100 dB at a counting rate which normally stays below 10^4 but could increase significantly in case of an accident. The need for high resolution at low energies, non-linear signal processing to accommodate the specified dynamic range, easy calibration and thermal stability are conflicting requirements which pose formidable design problems. These problems are reviewed, and a practical approach to their solution is given employing a single processing channel. (author)

  19. Musical Audio Synthesis Using Autoencoding Neural Nets

    OpenAIRE

    Sarroff, Andy; Casey, Michael A.

    2014-01-01

    With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained to learn a mapping from low level audio features to one or more higher-level representations. Such artificial neural networks are commonly used in classification and regression settings to perform arbitrary tasks. In this work we suggest repurposing autoencoding neural networks as musical audio synthesizers. We offer an interactive musical audio synt...
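    As a rough illustration of the idea of repurposing an autoencoder as a synthesizer, the sketch below only shows the shape of a frame-wise autoencoder whose decoder could be driven with latent vectors. The layer sizes, STFT dimension, and choice of PyTorch are assumptions, not the authors' architecture.

```python
# Hedged sketch (not the authors' model): a small autoencoder over magnitude-
# spectrum frames; decoding latent vectors acts as a crude synthesizer.
import torch
import torch.nn as nn

n_bins, n_latent = 513, 32          # assumed STFT size and bottleneck width

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bins, 128), nn.ReLU(),
                                 nn.Linear(128, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_bins), nn.Softplus())

    def forward(self, x):
        return self.dec(self.enc(x))

model = FrameAutoencoder()
frames = torch.rand(64, n_bins)                  # stand-in for real STFT frames
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()                                   # one training step would follow
```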

  20. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    Science.gov (United States)

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
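    The core idea lends itself to a compact illustration: the same pseudo-random marker is embedded in two independently recorded streams, and their relative offset is recovered by cross-correlation. The stream lengths, marker length, gain, and offsets below are invented for the example.

```python
# Illustrative sketch of the underlying principle: a shared random marker is
# embedded in two recordings, and the offset between them is found by
# locating the marker in each stream via cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
marker = rng.standard_normal(2000)               # shared random reference signal

def embed(stream, marker, offset, gain=0.3):
    out = stream.copy()
    out[offset:offset + len(marker)] += gain * marker
    return out

audio = embed(rng.standard_normal(50_000), marker, offset=12_345)
video_track = embed(rng.standard_normal(50_000), marker, offset=9_876)

def marker_position(stream, marker):
    corr = np.correlate(stream, marker, mode="valid")
    return int(np.argmax(corr))

delay = marker_position(audio, marker) - marker_position(video_track, marker)
print("audio lags video by", delay, "samples")   # expect 12345 - 9876 = 2469
```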

  1. Hacking the aesthetic: David Haines and Joyce Hinterding's new ecologies of signal

    Directory of Open Access Journals (Sweden)

    Andrew Murphie

    2012-06-01

    Full Text Available Award winning Australian electronic artists David Haines and Joyce Hinterding's installation work is discussed as “signal-work.” Their work reconfigures signal within original assemblages involving subtle audio, high resolution video (both recorded and animated), kilometers of coiled copper wire, antennae or home-built electronics, electrostatic disturbances such as very low frequency radiation from the Milky Way, and cross-signal processing. The article develops a context for thinking about the work in terms of Whitehead's process philosophy, as this is relevant to media theory, as well as concepts of plural ecology and ongoing differentiation drawn from Bateson and Guattari. Signal processing is seen as key to all this. The article argues that in Hinterding and Haines' signal-work new sensations are produced, outside of the normal “syntax” of some models of aesthetic experience. This challenges some aspects of thinking about both aesthetics and political ecology.

  2. Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

    Directory of Open Access Journals (Sweden)

    Marko Helén

    2010-01-01

    Full Text Available This paper proposes a query by example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs) of their frame-wise acoustic features. Since the features are continuous valued, we propose to model them using Gaussian mixture models (GMMs) or hidden Markov models (HMMs). The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of Kullback-Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure for similarity. Measures based on GMMs or HMMs are shown to produce better results than those of the existing methods based on simpler statistics or histograms of the features. A good performance with low computational cost is obtained with the proposed Euclidean distance.
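    A minimal sketch of this kind of query-by-example scoring follows: each clip is summarized by a GMM over frame-wise features, and similarity is scored with a cross-likelihood ratio. The feature dimensionality, number of mixture components, and random stand-in features are assumptions, not the paper's exact configuration.

```python
# Sketch of GMM-based query by example: fit one GMM per clip, then score
# similarity with a cross-likelihood ratio (smaller value = more similar).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(features, n_components=8):
    return GaussianMixture(n_components, covariance_type="diag",
                           random_state=0).fit(features)

def cross_likelihood_distance(feat_a, feat_b, gmm_a, gmm_b):
    # Penalize clips whose features are unlikely under each other's model.
    return -(gmm_b.score(feat_a) + gmm_a.score(feat_b)
             - gmm_a.score(feat_a) - gmm_b.score(feat_b))

rng = np.random.default_rng(1)
query = rng.standard_normal((300, 13))       # stand-in for MFCC frames of a query
candidate = rng.standard_normal((400, 13))   # stand-in for a database sample
d = cross_likelihood_distance(query, candidate, fit_gmm(query), fit_gmm(candidate))
print("distance:", d)
```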

  3. Music Genre Classification Using MIDI and Audio Features

    Directory of Open Access Journals (Sweden)

    Abdullah Sonmez

    2007-01-01

    Full Text Available We report our findings on using MIDI files and audio features from MIDI, separately and combined together, for MIDI music genre classification. We use McKay and Fujinaga's 3-root and 9-leaf genre data set. In order to compute distances between MIDI pieces, we use the normalized compression distance (NCD). NCD uses the compressed length of a string as an approximation to its Kolmogorov complexity and has previously been used for music genre and composer clustering. We convert the MIDI pieces to audio and then use the audio features to train different classifiers. The MIDI and audio-from-MIDI classifiers alone achieve much smaller accuracies than those reported by McKay and Fujinaga, who used a number of domain-based MIDI features rather than NCD for their classification. Combining the MIDI and audio-from-MIDI classifiers improves accuracy and gets closer to, but still remains below, McKay and Fujinaga's accuracies. The best root genre accuracies achieved using MIDI, audio, and their combination are 0.75, 0.86, and 0.93, respectively, compared to McKay and Fujinaga's 0.98. Successful classifier combination requires diversity of the base classifiers. We achieve diversity by using a certain number of seconds of the MIDI file, different sample rates and sizes for the audio file, and different classification algorithms.
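    The normalized compression distance used above has a compact definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. The sketch below uses zlib as a stand-in compressor and placeholder file names; the study's compressor and data may differ.

```python
# Minimal normalized compression distance (NCD) between two byte strings,
# with zlib standing in for the compressor.
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# File names are placeholders for two MIDI pieces.
with open("piece_a.mid", "rb") as fa, open("piece_b.mid", "rb") as fb:
    print("NCD:", ncd(fa.read(), fb.read()))
```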

  4. Neural networks in signal processing

    International Nuclear Information System (INIS)

    Govil, R.

    2000-01-01

    Nuclear Engineering has matured during the last decade. In research and design, control, supervision, maintenance and production, mathematical models and theories are used extensively. In all such applications signal processing is embedded in the process. Artificial Neural Networks (ANN), because of their nonlinear, adaptive nature, are well suited to such applications where the classical assumptions of linearity and second-order Gaussian noise statistics cannot be made. ANNs can be treated as nonparametric techniques, which can model an underlying process from example data. They can also adapt their model parameters to statistical changes over time. Algorithms in the framework of Neural Networks in Signal Processing have found new application potential in the field of Nuclear Engineering. This paper reviews the fundamentals of Neural Networks in signal processing and their applications in tasks such as recognition/identification and control. The topics covered include dynamic modeling, model-based ANNs, statistical learning, eigenstructure-based processing and generalization structures. (orig.)

  5. The Build-Up Course of Visuo-Motor and Audio-Motor Temporal Recalibration

    Directory of Open Access Journals (Sweden)

    Yoshimori Sugano

    2011-10-01

    Full Text Available The sensorimotor timing is recalibrated after a brief exposure to a delayed feedback of voluntary actions (temporal recalibration effect, TRE; Heron et al., 2009; Stetson et al., 2006; Sugano et al., 2010). We introduce a new paradigm, namely ‘synchronous tapping’ (ST), which allows us to investigate how the TRE builds up during adaptation. In each experimental trial, participants were repeatedly exposed to a constant lag (∼150 ms) between their voluntary action (pressing a mouse) and a feedback stimulus (a visual flash / an auditory click) 10 times. Immediately after that, they performed a ST task with the same stimulus as a pace signal (7 flashes / clicks). A subjective ‘no-delay condition’ (∼50 ms) served as control. The TRE manifested itself as a change in the tap-stimulus asynchrony that compensated the exposed lag (e.g., after lag adaptation, the tap preceded the stimulus more than in the control condition) and built up quickly (∼3–6 trials, ∼23–45 s) in both the visuo-motor and audio-motor domains. The audio-motor TRE was bigger and built up faster than the visuo-motor one. To conclude, the TRE is comparable between the visuo-motor and audio-motor domains, though they differ slightly in size and build-up rate.

  6. Current-Driven Switch-Mode Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Knott, Arnold; Buhl, Niels Christian; Andersen, Michael A. E.

    2012-01-01

    The conversion of electrical energy into sound waves by electromechanical transducers is proportional to the current through the coil of the transducer. However virtually all audio power amplifiers provide a controlled voltage through the interface to the transducer. This paper presents a switch-mode audio power amplifier that not only provides a controlled current but is also supplied by current. This results in an output filter size reduction by a factor of 6. The implemented prototype shows decent audio performance with THD + N below 0.1%.

  7. Radiation signal processing system

    International Nuclear Information System (INIS)

    Bennett, M.; Knoll, G.; Strange, D.

    1980-01-01

    An improved signal processing system for radiation imaging apparatus comprises: a radiation transducer producing transducer signals proportional to apparent spatial coordinates of detected radiation events; means for storing true spatial coordinates corresponding to a plurality of predetermined apparent spatial coordinates relative to selected detected radiation events, said means for storing being responsive to said transducer signal and producing an output signal representative of said true spatial coordinates; and means for interpolating the true spatial coordinates of detected radiation events located intermediate the stored true spatial coordinates, said means for interpolating communicating with said means for storing.

  8. Audio Mining with emphasis on Music Genre Classification

    DEFF Research Database (Denmark)

    Meng, Anders

    2004-01-01

    Audio is an important part of our daily life; basically it enriches our impression of the world around us, whether through communication, music, danger detection, etc. Currently the field of Audio Mining, which here includes areas of music genre, music recognition/retrieval, playlist generation... the world the problem of detecting environments from the input audio is researched so as to increase the quality of life of the hearing-impaired. Basically there is a lot of work within the field of audio mining. The presentation will mainly focus on music genre classification, where we have a fixed set of genres... to choose from. Basically every audio mining system consists more or less of the same stages as in the music genre setting. My research so far has mainly focused on finding relevant features for music genre classification that live at different timescales, using early and late information fusion. It has...

  9. Subjective and Objective Assessment of Perceived Audio Quality of Current Digital Audio Broadcasting Systems and Web-Casting Applications

    NARCIS (Netherlands)

    Pocta, P.; Beerends, J.G.

    2015-01-01

    This paper investigates the impact of different audio codecs typically deployed in current digital audio broadcasting (DAB) systems and web-casting applications, which represent a main source of quality impairment in these systems and applications, on the quality perceived by the end user. Both

  10. A 115dB-DR Audio DAC with –61dBFS out-of-band noise

    NARCIS (Netherlands)

    Westerveld, H.J.; Schinkel, Daniel; van Tuijl, Adrianus Johannes Maria

    2015-01-01

    Out-of-band noise (OBN) is troublesome in analog circuits that process the output of a noise-shaping audio DAC. It causes slewing in amplifiers and aliasing in sampling circuits like ADCs and class-D amplifiers. Nonlinearity in these circuits also causes cross-modulation of the OBN into the audio

  11. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    Science.gov (United States)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause a substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiration irregularity. The rationale of this thesis was to study the improvement in regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed an insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that the cosine and cosine^4 models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution

  13. Fusion of audio and visual cues for laughter detection

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audio-visual approach to distinguishing laughter from speech and we show that integrating the information from audio and video channels leads to improved performance over single-modal

  14. Perceptual Audio Hashing Functions

    Directory of Open Access Journals (Sweden)

    Emin Anarım

    2005-07-01

    Full Text Available Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
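    The sketch below is loosely in the spirit of the second hash described above: a clip's cepstral-feature matrix is summarized by its singular values, which are then binarized. It is only an illustration of the singular-value-summary idea; the file name, feature choice, and binarization rule are assumptions, not the authors' construction.

```python
# Illustrative perceptual-hash-style summary: singular values of a cepstral
# feature matrix, binarized into a short bit string.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=None)        # placeholder file name
cepstra = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)

s = np.linalg.svd(cepstra, compute_uv=False)     # singular-value summary
bits = (s > np.median(s)).astype(int)            # crude binarization (assumption)
print("hash bits:", "".join(map(str, bits)))
```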

  15. A signal theoretic introduction to random processes

    CERN Document Server

    Howard, Roy M

    2015-01-01

    A fresh introduction to random processes utilizing signal theory By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features:  A coherent account of the mathematical fundamentals and signal theory that underpin the presented material Unique, in-depth coverage of

  16. Audio localization for mobile robots

    OpenAIRE

    de Guillebon, Thibaut; Grau Saldes, Antoni; Bolea Monte, Yolanda

    2009-01-01

    The department of the University for which I worked is developing a project based on interaction with robots in the environment. My work was to define an audio system for the robot. The audio system that I had to realize consists of a mobile head which is able to follow the sound in its environment. This subject was treated as a research problem, with the freedom to find and develop different solutions and make them evolve in the chosen direction.

  17. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    Full Text Available This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features that were analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The size of the information necessary to build a HAR model directly impacts the accuracy of the model. This problem was also tackled in the present work; our results indicate that we are capable of recognizing a human activity with an accuracy of 85% using the proposed HAR model. This means that minimum computational costs are needed, thus allowing portable devices to identify human activities using audio as an information source.
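    A hedged sketch of this kind of pipeline is shown below: per-clip MFCC statistics plus a few simple signal statistics are fed to a random forest. The file names, labels, feature set, and forest size are placeholders, not the study's data or configuration, and the genetic-algorithm feature selection used in the paper is omitted.

```python
# Sketch: MFCC + simple statistical features per clip, classified with a
# random forest (all paths and labels below are hypothetical).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    stats = [y.mean(), y.std(), np.abs(y).max(),
             librosa.feature.zero_crossing_rate(y).mean()]
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), stats])

paths = ["walk_01.wav", "cook_01.wav", "walk_02.wav", "cook_02.wav"]
labels = ["walking", "cooking", "walking", "cooking"]

X = np.vstack([clip_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))
```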

  18. Digital circuit for the introduction and later removal of dither from an analog signal

    Science.gov (United States)

    Borgen, Gary S.

    1994-05-01

    An electronic circuit is presented for accurately digitizing an analog audio or similar data signal into a digital equivalent signal by introducing dither into the analog signal and subsequently removing the dither from the digitized signal prior to its conversion back to an analog signal which is a substantial replica of the incoming analog audio or similar data signal. The electronic circuit of the present invention is characterized by a first pseudo-random number generator, which generates digital random noise signals (dither) for addition to the digital equivalent signal, and a second pseudo-random number generator, which generates subtractive digital random noise signals for the subsequent removal of dither from the digital equivalent signal prior to its conversion to the analog replica signal.
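    The principle can be illustrated in a few lines: two identically seeded pseudo-random generators produce the same dither sequence, one added before quantization and one subtracted afterwards, so the quantization error is decorrelated while the dither itself cancels. The signal, seed, and 16-bit step size below are assumptions for the sketch, not the patented circuit.

```python
# Conceptual sketch of dither addition and later removal using two identically
# seeded pseudo-random number generators.
import numpy as np

rng_add = np.random.default_rng(42)   # first PRNG: adds dither before quantization
rng_sub = np.random.default_rng(42)   # second PRNG, same seed: removes it afterwards

def quantize(x, step=1 / 2**15):      # 16-bit style uniform quantizer
    return np.round(x / step) * step

t = np.linspace(0, 1, 48_000, endpoint=False)
analog = 0.25 * np.sin(2 * np.pi * 440 * t)

dither = (rng_add.random(analog.size) - 0.5) * (1 / 2**15)
digital = quantize(analog + dither)                                  # dithered A/D
restored = digital - (rng_sub.random(analog.size) - 0.5) * (1 / 2**15)  # D/A side
print("residual error std:", np.std(restored - analog))
```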

  19. Evaluation of User Satisfaction with an Audio Books Application

    Directory of Open Access Journals (Sweden)

    Raditya Maulana Anuraga

    2017-02-01

    Full Text Available Listeno is the first audio books application in Indonesia, letting users obtain books in audio form in much the same way they listen to music. Listeno faces several problems: a requested offline-mode feature that has not yet been released, the security of its mp3 files, which must be considered, and a target of 100,000 active users that has not yet been reached. This research aims to evaluate user satisfaction with the Audio Books application using Nielsen's approach as the research method. The analysis in this study uses Importance Performance Analysis (IPA) combined with the User Satisfaction Index (IKP), based on the following indicators: usefulness, utility, usability, learnability, efficiency, memorability, error, and satisfaction. The results show that users of the Audio Books application are fairly satisfied, with a calculated IKP of 69.58%.

  20. The Sweet-Home project: audio processing and decision making in smart home to improve well-being and reliance.

    Science.gov (United States)

    Vacher, Michel; Chahuara, Pedro; Lecouteux, Benjamin; Istrate, Dan; Portet, Francois; Joubert, Thierry; Sehili, Mohamed; Meillon, Brigitte; Bonnefond, Nicolas; Fabre, Sébastien; Roux, Camille; Caffiau, Sybille

    2013-01-01

    The Sweet-Home project aims at providing audio-based interaction technology that lets the user have full control over their home environment, at detecting distress situations and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project, focusing on the implemented techniques for speech and sound recognition as well as context-aware decision making under uncertainty. A user experiment in a smart home demonstrates the interest of this audio-based technology.

  1. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction realized by audiences of audio-visual contents. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the content as a whole; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process is carried out for each channel, we propose a method of representation of all the informative values an agent can obtain from a content, using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation allows reflecting the evolution of the informative items through the evolution of the audio-visual content.

  2. Use of sparker signal to classify seafloor sediment

    Digital Repository Service at National Institute of Oceanography (India)

    Pathak, D.; Ranade, G.; Sudhakar, T.

    During cruise 190 of R.V. Gaveshani, the sparker signal was recorded in analog form on audio cassettes. This signal has been digitized and a statistical computation, viz. the normalized cross-correlation function between successive echoes...
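    The statistic named above has a straightforward numerical form; the sketch below computes a normalized cross-correlation between two successive digitized echoes. The synthetic echoes and lengths are placeholders for the digitized cassette data.

```python
# Normalized cross-correlation between two successive echoes: a peak near +1
# indicates very similar echo shapes (typical of a hard, reflective seafloor).
import numpy as np

def normalized_xcorr(a, b):
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")

rng = np.random.default_rng(3)
echo1 = rng.standard_normal(256)                          # stand-in echo
echo2 = np.roll(echo1, 5) + 0.1 * rng.standard_normal(256)  # similar, shifted echo
print("peak correlation:", normalized_xcorr(echo1, echo2).max())
```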

  3. Removable Watermarking as a Control Against Cyber Crime in Digital Audio

    Directory of Open Access Journals (Sweden)

    Reyhani Lian Putri

    2017-08-01

    Full Text Available The rapid development of information technology requires its users to be more careful as cyber crime keeps increasing. Many parties have developed various techniques for protecting digital data, one of which is watermarking. Watermarking technology serves to identify, protect, or mark digital data, whether audio, image, or video, owned by its holders. However, such techniques can still be broken by irresponsible parties. In this research, watermarking is applied to digital audio by embedding a watermark that is clearly audible to human hearing (perceptible) into the host audio. The goal is to protect the audio data, so that any other party wishing to obtain the audio must hold a "key" to remove the watermark. This removable watermarking process is performed on watermark data whose embedding method is known, so that the watermark can be removed and the audio quality improved. Using this method, the audio watermarking performance at the highest distortion yields an average SNR of 7.834 dB and an average ODG of -3.77. The audio quality improves after the watermark is removed, with the average SNR rising to 24.986 dB, the average ODG to -1.064, and an MOS score of 4.40.

  4. Advanced optical signal processing of broadband parallel data signals

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Hu, Hao; Kjøller, Niels-Kristian

    2016-01-01

    Optical signal processing may aid in reducing the number of active components in communication systems with many parallel channels, e.g. by using telescopic time lens arrangements to perform format conversion and allow for WDM regeneration.

  5. A listening test system for automotive audio

    DEFF Research Database (Denmark)

    Christensen, Flemming; Geoff, Martin; Minnaar, Pauli

    2005-01-01

    This paper describes a system for simulating automotive audio through headphones for the purposes of conducting listening experiments in the laboratory. The system is based on binaural technology and consists of a component for reproducing the sound of the audio system itself and a component...

  6. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  7. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    Full Text Available We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large-vocabulary (approximately 1000 words) speech recognition experiments. The experiments used clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB) with additive white Gaussian noise, and by 19% relative to the audio-only speech recognition WER under clean audio conditions.

  8. Parametric Packet-Layer Model for Evaluating Audio Quality in Multimedia Streaming Services

    Science.gov (United States)

    Egi, Noritsugu; Hayashi, Takanori; Takahashi, Akira

    We propose a parametric packet-layer model for monitoring audio quality in multimedia streaming services such as Internet protocol television (IPTV). This model estimates audio quality of experience (QoE) on the basis of quality degradation due to coding and packet loss of an audio sequence. The input parameters of this model are audio bit rate, sampling rate, frame length, packet-loss frequency, and average burst length. Audio bit rate, packet-loss frequency, and average burst length are calculated from header information in received IP packets. For sampling rate, frame length, and audio codec type, the values or the names used in monitored services are input into this model directly. We performed a subjective listening test to examine the relationships between these input parameters and perceived audio quality. The codec used in this test was the Advanced Audio Codec-Low Complexity (AAC-LC), which is one of the international standards for audio coding. On the basis of the test results, we developed an audio quality evaluation model. The verification results indicate that audio quality estimated by the proposed model has a high correlation with perceived audio quality.
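    For illustration only, the sketch below shows the general shape such a parametric packet-layer estimator might take: a coding-dependent baseline that degrades with packet-loss frequency and burstiness. The functional form and all coefficients are invented for the example and are not the published model.

```python
# Hypothetical parametric packet-layer estimator: NOT the published model,
# only an illustration of how header-derived parameters could map to a score.
import math

def estimate_audio_quality(bitrate_kbps, loss_freq_per_min, avg_burst_len,
                           base=4.5, a=0.9, b=0.15, c=0.1):
    coding_quality = base - a * math.exp(-bitrate_kbps / 48.0)   # assumed curve
    loss_penalty = b * loss_freq_per_min * (1 + c * (avg_burst_len - 1))
    return max(1.0, coding_quality - loss_penalty)                # clip to MOS-like range

print(estimate_audio_quality(bitrate_kbps=96, loss_freq_per_min=2, avg_burst_len=3))
```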

  9. Audio power amplifier design handbook

    CERN Document Server

    Self, Douglas

    2013-01-01

    This book is essential for audio power amplifier designers and engineers for one simple reason...it enables you as a professional to develop reliable, high-performance circuits. The Author Douglas Self covers the major issues of distortion and linearity, power supplies, overload, DC-protection and reactive loading. He also tackles unusual forms of compensation and distortion produced by capacitors and fuses. This completely updated fifth edition includes four NEW chapters including one on The XD Principle, invented by the author, and used by Cambridge Audio. Cro

  10. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries of the database are continuously added, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for a non-expert human user to create the ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set. The remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been manually labeled into the three classes defined above by domain experts.
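    A small sketch of the threshold-style decision-tree idea follows: a shallow CART over a few frame-level statistics separates speech, music, and song, so classifying a new entry costs only a handful of comparisons. The features, random training data, and tree depth are stand-ins, not the AESS setup.

```python
# Shallow CART sketch for three audio classes; the training data here is random
# and only illustrates the structure of the classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Columns (assumed): zero-crossing rate, spectral-centroid proxy, harmonicity proxy.
X = rng.random((90, 3))
y = np.repeat(["speech", "music", "song"], 30)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict(X[:2]))        # each prediction is only a few threshold tests
```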

  11. Digital Signal Processing applied to Physical Signals

    CERN Document Server

    Alberto, Diego; Musa, L

    2011-01-01

    It is well known that many of the scientific and technological discoveries of the XXI century will depend on the capability of processing and understanding a huge quantity of data. With the advent of the digital era, a fully digital and automated treatment can be designed and performed. From data mining to data compression, from signal elaboration to noise reduction, processing is essential to manage and enhance features of interest after every data acquisition (DAQ) session. In the near future, science will move towards interdisciplinary research. This work gives examples of the application of signal processing to different fields of physics, from nuclear particle detectors to biomedical examinations. In Chapter 1 a brief description of the collaborations that made this thesis possible is given, together with a list of the publications co-produced by the author in these three years. The most important notations, definitions and acronyms used in the work are also provided. In Chapter 2, the last r...

  12. Use of Effective Audio in E-learning Courseware

    OpenAIRE

    Ray, Kisor

    2015-01-01

    E-Learning uses electronic media and information & communication technologies to provide education to the masses. E-learning delivers hypertext, text, audio, images, animation and videos using desktop standalone computers, local area network based intranets and internet based content. While producing e-learning content or courseware, a major decision-making factor is whether to use audio for the benefit of the end users. Generally, three types of audio can be used in e-learning: narration, mus...

  13. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  14. Tune in the Net with RealAudio.

    Science.gov (United States)

    Buchanan, Larry

    1997-01-01

    Describes how to connect to the RealAudio Web site to download a player that provides sound from Web pages to the computer through streaming technology. Explains hardware and software requirements and provides addresses for other RealAudio Web sites, including weather information and current news. (LRW)

  15. Electronic devices for analog signal processing

    CERN Document Server

    Rybin, Yu K

    2012-01-01

    Electronic Devices for Analog Signal Processing is intended for engineers and postgraduates and considers electronic devices applied to process analog signals in instrument making, automation, measurements, and other branches of technology. They perform various transformations of electrical signals: scaling, integration, logarithming, etc. The need for their deeper study is caused, on the one hand, by the widening variety of input signal forms and the increasing accuracy and performance of such devices, and on the other hand, by the fact that new devices constantly emerge and are already widely used in practice, yet little information about them is given in books on electronics. The basic approach to presenting the material in Electronic Devices for Analog Signal Processing can be formulated as follows: study with help from self-education. While divided into seven chapters, each chapter contains theoretical material, examples of practical problems, questions and tests. The most difficult questions are marked by a diamon...

  16. Analysis, Synthesis, and Classification of Nonlinear Systems Using Synchronized Swept-Sine Method for Audio Effects

    Directory of Open Access Journals (Sweden)

    Novak Antonin

    2010-01-01

    Full Text Available A new method of identification, based on an input synchronized exponential swept-sine signal, is used to analyze and synthesize nonlinear audio systems like overdrive pedals for guitar. Two different pedals are studied; the first one exhibiting a strong influence of the input signal level on its input/output law and the second one exhibiting a weak influence of this input signal level. The Synchronized Swept Sine method leads to a Generalized Polynomial Hammerstein model equivalent to the pedals under test. The behaviors of both pedals are illustrated through model-based resynthesized signals. Moreover, it is also shown that this method leads to a criterion allowing the classification of the nonlinear systems under test, according to the influence of the input signal levels on their input/output law.
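    The input signal of this method has a simple closed form; the sketch below generates an exponential swept sine whose sweep rate L is rounded so that f1·L is an integer, which is a simplified reading of the synchronization condition, given here for illustration only (frequencies, duration, and sample rate are assumptions).

```python
# Exponential swept sine with a rounded sweep-rate parameter L, in the spirit
# of the synchronized swept-sine method (simplified, for illustration).
import numpy as np

def synchronized_sweep(f1, f2, duration, fs):
    # Round L so that f1 * L is an integer (assumed form of the sync constraint).
    L = np.round(f1 * duration / np.log(f2 / f1)) / f1
    t = np.arange(int(L * np.log(f2 / f1) * fs)) / fs
    # Instantaneous frequency is f1 * exp(t / L), sweeping from f1 to f2.
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1)), t

x, t = synchronized_sweep(f1=20.0, f2=20_000.0, duration=5.0, fs=48_000)
print(f"{len(x)} samples, actual sweep duration {t[-1]:.3f} s")
```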

  17. Handbook of signal processing systems

    CERN Document Server

    Deprettere, Ed; Leupers, Rainer; Takala, Jarmo

    2013-01-01

    Handbook of Signal Processing Systems is organized in three parts. The first part motivates representative applications that drive and apply state-of-the art methods for design and implementation of signal processing systems; the second part discusses architectures for implementing these applications; the third part focuses on compilers and simulation tools, describes models of computation and their associated design tools and methodologies. This handbook is an essential tool for professionals in many fields and researchers of all levels.

  18. Development of an Ontology-Directed Signal Processing Toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Stephen W. Lang

    2011-05-27

    This project was focused on the development of tools for the automatic configuration of signal processing systems. The goal is to develop tools that will be useful in a variety of Government and commercial areas and useable by people who are not signal processing experts. In order to get the most benefit from signal processing techniques, deep technical expertise is often required in order to select appropriate algorithms, combine them into a processing chain, and tune algorithm parameters for best performance on a specific problem. Therefore a significant benefit would result from the assembly of a toolbox of processing algorithms that has been selected for their effectiveness in a group of related problem areas, along with the means to allow people who are not signal processing experts to reliably select, combine, and tune these algorithms to solve specific problems. Defining a vocabulary for problem domain experts that is sufficiently expressive to drive the configuration of signal processing functions will allow the expertise of signal processing experts to be captured in rules for automated configuration. In order to test the feasibility of this approach, we addressed a lightning classification problem, which was proposed by DOE as a surrogate for problems encountered in nuclear nonproliferation data processing. We coded a toolbox of low-level signal processing algorithms for extracting features of RF waveforms, and demonstrated a prototype tool for screening data. We showed examples of using the tool for expediting the generation of ground-truth metadata, for training a signal recognizer, and for searching for signals with particular characteristics. The public benefits of this approach, if successful, will accrue to Government and commercial activities that face the same general problem - the development of sensor systems for complex environments. It will enable problem domain experts (e.g. analysts) to construct signal and image processing chains without

  19. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  20. Determination of over current protection thresholds for class D audio amplifiers

    DEFF Research Database (Denmark)

    Nyboe, Flemming; Risbo, L; Andreani, Pietro

    2005-01-01

    Monolithic class-D audio amplifiers typically feature built-in over current protection circuitry that shuts down the amplifier in case of a short circuit on the output speaker terminals. To minimize cost, the threshold at which the device shuts down must be set just above the maximum current...... that can flow in the loudspeaker during normal operation. The current required is determined by the complex loudspeaker impedance and properties of the music signals played. This work presents a statistical analysis of peak output currents when playing music on typical loudspeakers for home entertainment....

  1. Estimation of violin bowing features from Audio recordings with Convolutional Networks

    DEFF Research Database (Denmark)

    Perez-Carillo, Alfonso; Purwins, Hendrik

    The acquisition of musical gestures and particularly of instrument controls from a musical performance is a field of increasing interest with applications in many research areas. In the last years, the development of novel sensing technologies has allowed the fine measurement of such controls...... and low-cost of the acquisition and its nonintrusive nature. The main challenge is designing robust detection algorithms to be as accurate as the direct approaches. In this paper, we present an indirect acquisition method to estimate violin bowing controls from audio signal analysis based on training...

  2. Advances in heuristic signal processing and applications

    CERN Document Server

    Chatterjee, Amitava; Siarry, Patrick

    2013-01-01

    There have been significant developments in the design and application of algorithms for both one-dimensional signal processing and multidimensional signal processing, namely image and video processing, with the recent focus changing from a step-by-step procedure of designing the algorithm first and following up with in-depth analysis and performance improvement to instead applying heuristic-based methods to solve signal-processing problems. In this book the contributing authors demonstrate both general-purpose algorithms and those aimed at solving specialized application problems, with a spec

  3. Efficiency Optimization in Class-D Audio Amplifiers

    DEFF Research Database (Denmark)

    Yamauchi, Akira; Knott, Arnold; Jørgensen, Ivan Harald Holger

    2015-01-01

    This paper presents a new power efficiency optimization routine for designing Class-D audio amplifiers. The proposed optimization procedure finds design parameters for the power stage and the output filter, and the optimum switching frequency such that the weighted power losses are minimized under...... the given constraints. The optimization routine is applied to minimize the power losses in a 130 W class-D audio amplifier based on consumer behavior investigations, where the amplifier operates at idle and low power levels most of the time. Experimental results demonstrate that the optimization method can...... lead to around 30 % of efficiency improvement at 1.3 W output power without significant effects on both audio performance and the efficiency at high power levels....

  4. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  5. Portable Audio Design

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2014-01-01

    attention to the specific genre; a grasping of the complex relationship between site and time, the actual and the virtual; and becoming acquainted with the specific site’s soundscape by approaching it both intuitively and systematically. These steps will finally lead to an audio production that not only...

  6. Digital Signal Processing For Low Bit Rate TV Image Codecs

    Science.gov (United States)

    Rao, K. R.

    1987-06-01

    In view of the 56 KBPS digital switched network services and the ISDN, low bit rate codecs for providing real time full motion color video are at various stages of development. Some companies have already brought the codecs into the market. They are being used by industry and some Federal Agencies for video teleconferencing. In general, these codecs have various features such as multiplexing audio and data, high resolution graphics, encryption, error detection and correction, self diagnostics, freeze-frame, split video, text overlay etc. To transmit the original color video on a 56 KBPS network requires bit rate reduction of the order of 1400:1. Such a large scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to the data compression techniques, various preprocessing operations such as noise filtering, composite-component transformation and horizontal and vertical blanking interval removal are to be implemented. Invariably spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of the data reduction methods. Other essential blocks in the system are quantizer, bit allocation, buffer, multiplexer, channel coding etc.

  7. AUDIO CRYPTANALYSIS- AN APPLICATION OF SYMMETRIC KEY CRYPTOGRAPHY AND AUDIO STEGANOGRAPHY

    Directory of Open Access Journals (Sweden)

    Smita Paira

    2016-09-01

    Full Text Available In the recent trend of network and technology, “Cryptography” and “Steganography” have emerged as essential elements for providing network security. Although Cryptography plays a major role in the fabrication and modification of the secret message into an encrypted version, it has certain drawbacks. Steganography is the art that addresses one of the basic limitations of Cryptography. In this paper, a new algorithm has been proposed based on both Symmetric Key Cryptography and Audio Steganography. The combination of a randomly generated Symmetric Key with the LSB technique of Audio Steganography allows a secret message to pass unrecognized through an insecure medium. The Stego File generated is almost lossless, giving 100 percent recovery of the original message. This paper also presents a detailed experimental analysis of the algorithm, a brief comparison with other existing algorithms and a future scope. The experimental verification and security issues are promising.
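
    The LSB embedding step can be sketched in a few lines of Python. The following is a minimal illustration, not the paper's algorithm: the payload is assumed to be already encrypted with the symmetric key, the cover audio is a synthetic 16-bit PCM buffer, and the function names are hypothetical.

        import numpy as np

        def embed_lsb(samples, payload_bits):
            # hide payload bits in the least significant bit of 16-bit PCM samples
            stego = samples.copy()
            n = len(payload_bits)
            stego[:n] = (stego[:n] & ~1) | payload_bits
            return stego

        def extract_lsb(stego, n_bits):
            # recover the hidden bits from the first n_bits samples
            return stego[:n_bits] & 1

        # toy cover audio and an (already encrypted) secret payload
        cover = (np.random.randn(8000) * 2000).astype(np.int16)
        message = np.frombuffer(b"secret", dtype=np.uint8)
        bits = np.unpackbits(message).astype(np.int16)

        stego = embed_lsb(cover, bits)
        recovered = np.packbits(extract_lsb(stego, bits.size).astype(np.uint8)).tobytes()
        assert recovered == b"secret"    # lossless recovery of the payload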

  8. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    Science.gov (United States)

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  9. Fall Detection Using Smartphone Audio Features.

    Science.gov (United States)

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
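
    A minimal sketch of the feature-plus-classifier pipeline described above, using one of the four feature types (MFCCs) and an ANN classifier. It assumes the librosa and scikit-learn libraries; the file names, labels, and network size are placeholders rather than the authors' configuration.

        import numpy as np
        import librosa
        from sklearn.neural_network import MLPClassifier

        def mfcc_features(path, sr=16000, n_mfcc=13):
            # load one sound event and summarize its MFCC trajectory by mean and std
            y, sr = librosa.load(path, sr=sr)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # hypothetical recordings of fall and no-fall events (placeholder file names)
        paths = ["fall_01.wav", "fall_02.wav", "fall_03.wav",
                 "chair_drag.wav", "door_slam.wav", "footsteps.wav"]
        labels = [1, 1, 1, 0, 0, 0]          # 1 = fall, 0 = no fall

        X = np.vstack([mfcc_features(p) for p in paths])
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)

        print("training accuracy:", clf.score(X, labels))
        print("prediction for a new clip:", clf.predict([mfcc_features("unknown_event.wav")])[0])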

  10. A digital input class-D audio amplifier with sixth-order PWM

    International Nuclear Information System (INIS)

    Luo Shumeng; Li Dongmei

    2013-01-01

    A digital input class-D audio amplifier with a sixth-order pulse-width modulation (PWM) modulator is presented. This modulator moves the PWM generator into the closed sigma-delta modulator loop. The noise and distortions generated at the PWM generator module are suppressed by the high gain of the forward loop of the sigma-delta modulator. Therefore, at the output of the modulator, a very clean PWM signal is acquired for driving the power stage of the class-D amplifier. A sixth-order modulator is designed to balance the performance and the system clock speed. Fabricated in standard 0.18 μm CMOS technology, this class-D amplifier achieves 110 dB dynamic range, 100 dB signal-to-noise ratio, and 0.0056% total harmonic distortion plus noise. (semiconductor integrated circuits)

  11. Class D Audio Amplifier with Butterworth-Type Feedback

    Directory of Open Access Journals (Sweden)

    Gunawan Dewantoro

    2016-03-01

    Full Text Available An ideal class D amplifier would amplify signals without any noise or distortion, yielding 100% efficiency and 0% Total Harmonic Distortion (THD). In practice, however, class D amplifiers have drawbacks that lead to nonlinearity and increased THD, so a feedback mechanism is employed to enhance the THD performance of the amplifier. Some feedback techniques use a first-order filter in the feedback path to retrieve the audio signal. This research proposes a second-order filter designed with a Butterworth approach. The power stage was realized as a full-bridge amplifier with MOSFETs to provide greater power. The class D amplifier was designed to meet the following specifications: maximum output power up to 32.6 W with an 8 Ω load, sensitivity of 90 mV/W, frequency response ranging from 20 Hz – 20 kHz with a tolerance of ± 1 dB, THD as low as 1.1 %, SNR up to 90.16 dB, and efficiency of 82.1 %.
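
    A second-order Butterworth filter of the kind used in the feedback path can be prototyped with scipy before committing to component values. The sketch below is only illustrative: the sampling rate and cutoff frequency are assumptions, not the values used in the paper.

        import numpy as np
        from scipy.signal import butter, freqz

        fs = 400_000      # assumed sampling rate for the feedback-path analysis (Hz)
        fc = 30_000       # assumed cutoff placed above the 20 Hz - 20 kHz audio band

        # second-order Butterworth low-pass, replacing the usual first-order filter
        b, a = butter(N=2, Wn=fc, btype="low", fs=fs)

        # inspect the magnitude response across the audio band
        w, h = freqz(b, a, worN=2048, fs=fs)
        band = (w >= 20) & (w <= 20_000)
        ripple_db = 20 * np.log10(np.abs(h[band]))
        print("max deviation in audio band: %.2f dB" % (ripple_db.max() - ripple_db.min()))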

  12. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
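
    The core difficulty described above, namely that for a Poisson-like process the variance tracks the mean so a subtracted trend survives in the variance, is easy to reproduce numerically. The snippet below demonstrates the problem itself, not the authors' pre-processing method; the trend shape and window size are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        trend = 5 + 4 * np.sin(2 * np.pi * np.arange(n) / n)   # slowly varying mean intensity
        counts = rng.poisson(trend)                             # Poisson-like photonic signal

        detrended = counts - trend                              # naive detrending (mean removed)

        # windowed variance still follows the trend, because var = mean for a Poisson process
        win = 500
        for start in (n // 4, 3 * n // 4):
            seg = detrended[start:start + win]
            print(f"window at {start}: mean={seg.mean():+.2f}, var={seg.var():.2f}, "
                  f"local intensity={trend[start:start + win].mean():.2f}")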

  13. Audio-Tutorial Instruction: A Strategy For Teaching Introductory College Geology.

    Science.gov (United States)

    Fenner, Peter; Andrews, Ted F.

    The rationale of audio-tutorial instruction is discussed, and the history and development of the audio-tutorial botany program at Purdue University is described. Audio-tutorial programs in geology at eleven colleges and one school are described, illustrating several ways in which programs have been developed and integrated into courses. Programs…

  14. Deep learning, audio adversaries, and music content analysis

    DEFF Research Database (Denmark)

    Kereliuk, Corey Mose; Sturm, Bob L.; Larsen, Jan

    2015-01-01

    We present the concept of adversarial audio in the context of deep neural networks (DNNs) for music content analysis. An adversary is an algorithm that makes minor perturbations to an input that cause major repercussions to the system response. In particular, we design an adversary for a DNN...... that takes as input short-time spectral magnitudes of recorded music and outputs a high-level music descriptor. We demonstrate how this adversary can make the DNN behave in any way with only extremely minor changes to the music recording signal. We show that the adversary cannot be neutralised by a simple...... filtering of the input. Finally, we discuss adversaries in the broader context of the evaluation of music content analysis systems....

  15. A Modified Adaptive Stochastic Resonance for Detecting Faint Signal in Sensors

    Directory of Open Access Journals (Sweden)

    Hengwei Li

    2007-02-01

    Full Text Available In this paper, an approach is presented to detect faint signals with strong noises in sensors by stochastic resonance (SR. We adopt the power spectrum as the evaluation tool of SR, which can be obtained by the fast Fourier transform (FFT. Furthermore, we introduce the adaptive filtering scheme to realize signal processing automatically. The key of the scheme is how to adjust the barrier height to satisfy the optimal condition of SR in the presence of any input. For the given input signal, we present an operable procedure to execute the adjustment scheme. An example utilizing one audio sensor to detect the fault information from the power supply is given. Simulation results show that th
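
    The abstract names the FFT-based power spectrum as the evaluation tool for stochastic resonance. The sketch below shows only that evaluation step, estimating the output signal-to-noise ratio of a faint tone buried in strong noise; in an adaptive scheme the barrier height would be adjusted to maximize this measure. The signal parameters are arbitrary placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        fs, f0, n = 1000.0, 5.0, 4000
        t = np.arange(n) / fs
        x = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, n)   # faint tone in noise

        # power spectrum via FFT, used to judge the stochastic resonance output
        spec = np.abs(np.fft.rfft(x)) ** 2 / n
        freqs = np.fft.rfftfreq(n, d=1 / fs)

        k = np.argmin(np.abs(freqs - f0))                 # bin of the driving frequency
        noise_floor = np.median(spec[max(k - 50, 1):k + 50])
        snr_db = 10 * np.log10(spec[k] / noise_floor)
        print(f"output SNR at {f0} Hz: {snr_db:.1f} dB")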

  16. Nonlinear filtering for LIDAR signal processing

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available LIDAR (Laser Integrated Radar is an engineering problem of great practical importance in environmental monitoring sciences. Signal processing for LIDAR applications involves highly nonlinear models and consequently nonlinear filtering. Optimal nonlinear filters, however, are practically unrealizable. In this paper, the Lainiotis's multi-model partitioning methodology and the related approximate but effective nonlinear filtering algorithms are reviewed and applied to LIDAR signal processing. Extensive simulation and performance evaluation of the multi-model partitioning approach and its application to LIDAR signal processing shows that the nonlinear partitioning methods are very effective and significantly superior to the nonlinear extended Kalman filter (EKF, which has been the standard nonlinear filter in past engineering applications.

  17. Application of wavelet transform in seismic signal processing

    International Nuclear Information System (INIS)

    Ghasemi, M. R.; Mohammadzadeh, A.; Salajeghe, E.

    2005-01-01

    Wavelet transform is a new tool for signal analysis which provides a simultaneous time and frequency representation of a signal. Under Multi Resolution Analysis, one can quickly determine details of signals and their properties using Fast Wavelet Transform algorithms. In this paper, for a better physical understanding of a signal and its basic algorithms, Multi Resolution Analysis together with wavelet transforms in the form of Digital Signal Processing is discussed. For Seismic Signal Processing, sets of Orthonormal Daubechies Wavelets are suggested. When dealing with the application of wavelets in SSP, one may consider denoising of the signal and compression of the data contained in the signal, both of which are important in seismic signal data processing. Using these techniques, the El Centro and Nagan signals were remodeled with 25% of the total points, resulting in satisfactory results with an acceptable error drift. Thus, totals of 1559 and 2500 points for the El Centro and Nagan seismic curves were reduced to 389 and 625 points respectively, with a very reasonable error drift, details of which are recorded in the paper. Finally, future progress in signal processing based on wavelet theory is outlined.
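
    The compression idea reported above, reconstructing a record from roughly 25% of its wavelet coefficients, can be sketched with the PyWavelets package. The signal below is a synthetic stand-in for an accelerogram, and the wavelet ('db4') and decomposition level are assumptions rather than the paper's exact choices.

        import numpy as np
        import pywt

        # synthetic stand-in for a 1559-point accelerogram such as El Centro
        rng = np.random.default_rng(1)
        signal = np.cumsum(rng.normal(size=1559)) * np.hanning(1559)

        coeffs = pywt.wavedec(signal, "db4", level=5)        # multi-resolution analysis
        flat = np.concatenate(coeffs)

        # keep only the largest 25% of coefficients by magnitude, zero the rest
        thresh = np.quantile(np.abs(flat), 0.75)
        kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

        recon = pywt.waverec(kept, "db4")[:len(signal)]
        err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
        print(f"retained ~25% of coefficients, relative error {err:.3%}")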

  18. Signal processing for cognitive radios

    CERN Document Server

    Jayaweera, Sudharman K

    2014-01-01

    This book covers signal processing for cognitive radios (CRs) in depth, presenting the basic principles and application details, and it can be used both as a textbook and a reference book. It introduces the specific type of CR that has gained the most research attention in recent years: the CR for Dynamic Spectrum Access (DSA). It provides signal processing solutions to each task by relating the tasks to materials covered in Part II. Specialized chapters then discuss specific signal processing algorithms required for DSA and DSS cognitive radios.

  19. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is an important and widely used application nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure-speech, music, environment sound, and silence. An algorithm is proposed that preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, firstly, into speech and non-speech segments using bagged support vector machines; the non-speech segment is further classified into music and environment sound using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments by a rule-based classifier. Minimal data is used for training the classifiers; ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
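
    A minimal sketch of the hybrid classifier cascade described above: bagged SVMs for speech versus non-speech, an ANN for music versus environment sound, and a simple energy rule for silence. It assumes scikit-learn; the features, labels, and thresholds are random placeholders, not the paper's trained models.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # placeholder frame-level features (e.g. MFCC/energy statistics) and labels
        X_sn, y_sn = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)   # speech vs non-speech
        X_me, y_me = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)   # music vs environment

        speech_clf = BaggingClassifier(SVC(), n_estimators=10).fit(X_sn, y_sn)
        music_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X_me, y_me)

        def classify_segment(features, energy, silence_threshold=0.01):
            # stage 1: bagged SVMs split speech from non-speech
            if speech_clf.predict([features])[0] == 1:
                # stage 2a: rule-based split of speech into silence vs pure speech
                return "silence" if energy < silence_threshold else "pure-speech"
            # stage 2b: ANN splits non-speech into music vs environment sound
            return "music" if music_clf.predict([features])[0] == 1 else "environment"

        print(classify_segment(rng.normal(size=12), energy=0.2))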

  20. Using online handwriting and audio streams for mathematical expressions recognition: a bimodal approach

    Science.gov (United States)

    Medjkoune, Sofiane; Mouchère, Harold; Petitrenaud, Simon; Viard-Gaudin, Christian

    2013-01-01

    The work reported in this paper concerns the problem of mathematical expressions recognition. This task is known to be a very hard one. We propose to alleviate the difficulties by taking into account two complementary modalities. The modalities referred to are handwriting and audio ones. To combine the signals coming from both modalities, various fusion methods are explored. Performances evaluated on the HAMEX dataset show a significant improvement compared to a single modality (handwriting) based system.

  1. Signal processing in noise waveform radar

    CERN Document Server

    Kulpa, Krzysztof

    2013-01-01

    This book is devoted to the emerging technology of noise waveform radar and its signal processing aspects. It is a new kind of radar, which uses a noise-like waveform to illuminate the target. The book includes an introduction to basic radar theory, starting from classical pulse radar, signal compression, and wave radar. The book then discusses the properties, difficulties and potential of noise radar systems, primarily for low-power and short-range civil applications. The contribution of modern signal processing techniques to making noise radar practical is emphasized, and application examples

  2. Processing of acoustic signal in rock desintegration

    Directory of Open Access Journals (Sweden)

    Futó Jozef

    2002-12-01

    Full Text Available For the determination of an effective rock disintegration for a given tool and rock type, it is necessary to define an optimal disintegration regime. Optimisation of the disintegration process in drilling means finding an appropriate pair of disintegration input parameters, i.e. the thrust and revolutions, for a quasi-equal rock environment. The disintegration process can be optimised to reach the maximum immediate drilling rate, the minimum specific disintegration energy, or the maximum ratio of immediate drilling rate to specific disintegration energy. For the determination of the optimal thrust and revolutions it is necessary to monitor the disintegration process. Monitoring of the disintegration process in real conditions is complicated by unfavourable factors, such as the presence of water, dust, vibrations etc. Following our present experience in the monitoring of drilling or full-profile driving, we try to replace the monitoring of input values by monitoring of the scanned acoustic signal. This method of monitoring can extend the optimisation of the disintegration process in technical practice. Its advantage consists in the registration of one acoustic signal by an appropriate microphone. Monitoring of the acoustic signal is also used in monitoring of metal machining by milling and turning jobs. The research results of scanning the acoustic signal in machining of metals are encouraging. The acoustic signal can be processed by different statistical parameters. The paper describes some results of monitoring of the acoustic signal in rock disintegration on the drilling stand of the Institute of Geotechnics SAS in Košice. The acoustic signal has been registered and processed in the no-load run of the electric motor, in the no-load run of the electric motor with a drilling fluid, and in drilling of the Ruskov andesite. Registration and processing of the acoustic signal is solved as a part of the research grant task within the basic research

  3. Ultrafast Nonlinear Signal Processing in Silicon Waveguides

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Mulvad, Hans Christian Hansen; Hu, Hao

    2012-01-01

    We describe recent demonstrations of exploiting highly nonlinear silicon waveguides for ultrafast optical signal processing. We describe wavelength conversion and serial-to-parallel conversion of 640 Gbit/s data signals and 1.28 Tbit/s demultiplexing and all-optical sampling.......We describe recent demonstrations of exploiting highly nonlinear silicon waveguides for ultrafast optical signal processing. We describe wavelength conversion and serial-to-parallel conversion of 640 Gbit/s data signals and 1.28 Tbit/s demultiplexing and all-optical sampling....

  4. Radar signal analysis and processing using Matlab

    CERN Document Server

    Mahafza, Bassem R

    2008-01-01

    Offering radar-related software for the analysis and design of radar waveform and signal processing, this book provides comprehensive coverage of radar signals and signal processing techniques and algorithms. It contains numerous graphical plots, common radar-related functions, table format outputs, and end-of-chapter problems. The complete set of MATLAB[registered] functions and routines are available for download online.

  5. Invariance algorithms for processing NDE signals

    Science.gov (United States)

    Mandayam, Shreekanth; Udpa, Lalita; Udpa, Satish S.; Lord, William

    1996-11-01

    Signals that are obtained in a variety of nondestructive evaluation (NDE) processes capture information not only about the characteristics of the flaw, but also reflect variations in the specimen's material properties. Such signal changes may be viewed as anomalies that could obscure defect related information. An example of this situation occurs during in-line inspection of gas transmission pipelines. The magnetic flux leakage (MFL) method is used to conduct noninvasive measurements of the integrity of the pipe-wall. The MFL signals contain information both about the permeability of the pipe-wall and the dimensions of the flaw. Similar operational effects can be found in other NDE processes. This paper presents algorithms to render NDE signals invariant to selected test parameters, while retaining defect related information. Wavelet transform based neural network techniques are employed to develop the invariance algorithms. The invariance transformation is shown to be a necessary pre-processing step for subsequent defect characterization and visualization schemes. Results demonstrating the successful application of the method are presented.

  6. Improvements of ModalMax High-Fidelity Piezoelectric Audio Device

    Science.gov (United States)

    Woodard, Stanley E.

    2005-01-01

    ModalMax audio speakers have been enhanced by innovative means of tailoring the vibration response of thin piezoelectric plates to produce a high-fidelity audio response. The ModalMax audio speakers are 1 mm in thickness. The device completely supplants the need to have a separate driver and speaker cone. ModalMax speakers can perform the same applications of cone speakers, but unlike cone speakers, ModalMax speakers can function in harsh environments such as high humidity or extreme wetness. New design features allow the speakers to be completely submersed in salt water, making them well suited for maritime applications. The sound produced from the ModalMax audio speakers has sound spatial resolution that is readily discernable for headset users.

  7. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.......Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D...

  8. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an...... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.......Due to increased computational power reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D...

  9. PSpice for digital signal processing

    CERN Document Server

    Tobin, Paul

    2007-01-01

    PSpice for Digital Signal Processing is the last in a series of five books using Cadence Orcad PSpice version 10.5 and introduces a very novel approach to learning digital signal processing (DSP). DSP is traditionally taught using Matlab/Simulink software but has some inherent weaknesses for students, particularly at the introductory level. The 'plug in variables and play' nature of these software packages can lure the student into thinking they possess an understanding they don't actually have because these systems produce results quickly without revealing what is going on. However, it must be

  10. Mixing audio concepts, practices and tools

    CERN Document Server

    Izhaki, Roey

    2013-01-01

    Your mix can make or break a record, and mixing is an essential catalyst for a record deal. Professional engineers with exceptional mixing skills can earn vast amounts of money and find that they are in demand by the biggest acts. To develop such skills, you need to master both the art and science of mixing. The new edition of this bestselling book offers all you need to know and put into practice in order to improve your mixes. Covering the entire process --from fundamental concepts to advanced techniques -- and offering a multitude of audio samples, tips and tricks, this boo

  11. Multisensory and Modality Specific Processing of Visual Speech in Different Regions of the Premotor Cortex

    Directory of Open Access Journals (Sweden)

    Daniel eCallan

    2014-05-01

    Full Text Available Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex has been shown to be active during both observation and execution of action (‘Mirror System’ properties, and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI study, participants identified vowels produced by a speaker in audio-visual (saw the speaker’s articulating face and heard her voice, visual only (only saw the speaker’s articulating face, and audio only (only heard the speaker’s voice conditions with varying audio signal-to-noise ratios in order to determine the regions of the premotor cortex involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the fMRI analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and premotor cortex. The left ventral inferior premotor cortex showed properties of multimodal (audio-visual enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the premotor cortex are involved with mapping unimodal (in this case visual sensory features of the speech signal with

  12. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    Science.gov (United States)

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  13. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
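
    For readers unfamiliar with the underlying principle, a short software sketch of Huffman coding is given below. Note that this builds a code table from symbol frequencies, whereas the AAC standard uses fixed, pre-defined codebooks and the article targets hardware implementations; the example only illustrates how a prefix code shortens the bitstream.

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            # build a prefix code from symbol frequencies
            heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            if len(heap) == 1:                      # degenerate case: one distinct symbol
                return {next(iter(heap[0][2])): "0"}
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in lo[2].items()}
                merged.update({s: "1" + c for s, c in hi[2].items()})
                heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
            return heap[0][2]

        # quantized spectral coefficients stand in for an AAC scalefactor band
        coeffs = [0, 0, 1, 0, -1, 2, 0, 0, 0, 1]
        table = huffman_code(coeffs)
        bitstream = "".join(table[c] for c in coeffs)
        print(table, len(bitstream), "bits")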

  14. Signal processing for boiling noise detection

    International Nuclear Information System (INIS)

    Ledwidge, T.J.; Black, J.L.

    1989-01-01

    The present paper deals with investigations of acoustic signals from a boiling experiment performed on the KNS I loop at KfK Karlsruhe. Signals have been analysed in frequency as well as in time domain. Signal characteristics successfully used to detect the boiling process have been found in time domain. (author). 6 refs, figs

  15. Proceedings of the IEEE Machine Learning for Signal Processing XVII

    DEFF Research Database (Denmark)

    The seventeenth of a series of workshops sponsored by the IEEE Signal Processing Society and organized by the Machine Learning for Signal Processing Technical Committee (MLSP-TC). The field of machine learning has matured considerably in both methodology and real-world application domains and has...... become particularly important for solution of problems in signal processing. As reflected in this collection, machine learning for signal processing combines many ideas from adaptive signal/image processing, learning theory and models, and statistics in order to solve complex real-world signal processing......, and two papers from the winners of the Data Analysis Competition. The program included papers in the following areas: genomic signal processing, pattern recognition and classification, image and video processing, blind signal processing, models, learning algorithms, and applications of machine learning...

  16. Acoustic MIMO signal processing

    CERN Document Server

    Huang, Yiteng; Chen, Jingdong

    2006-01-01

    A timely and important book addressing a variety of acoustic signal processing problems under multiple-input multiple-output (MIMO) scenarios. It uniquely investigates these problems within a unified framework offering a novel and penetrating analysis.

  17. Auto-Associative Recurrent Neural Networks and Long Term Dependencies in Novelty Detection for Audio Surveillance Applications

    Science.gov (United States)

    Rossi, A.; Montefoschi, F.; Rizzo, A.; Diligenti, M.; Festucci, C.

    2017-10-01

    Machine Learning applied to Automatic Audio Surveillance has been attracting increasing attention in recent years. In spite of several investigations based on a large number of different approaches, little attention had been paid to the environmental temporal evolution of the input signal. In this work, we propose an exploration in this direction comparing the temporal correlations extracted at the feature level with the one learned by a representational structure. To this aim we analysed the prediction performances of a Recurrent Neural Network architecture varying the length of the processed input sequence and the size of the time window used in the feature extraction. Results corroborated the hypothesis that sequential models work better when dealing with data characterized by temporal order. However, so far the optimization of the temporal dimension remains an open issue.
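
    A compact sketch of the general idea, an auto-associative (reconstruction-based) recurrent network whose reconstruction error serves as a novelty score, is given below in PyTorch. The architecture, feature dimensionality, and training data are placeholders and do not reproduce the authors' network.

        import torch
        import torch.nn as nn

        class RecurrentAutoAssociator(nn.Module):
            # GRU with a narrow hidden state that learns to reproduce its own input frames;
            # a large reconstruction error at test time flags a novel sound event
            def __init__(self, n_features=40, hidden=16):
                super().__init__()
                self.gru = nn.GRU(n_features, hidden, batch_first=True)
                self.out = nn.Linear(hidden, n_features)

            def forward(self, x):                 # x: (batch, time, n_features)
                h, _ = self.gru(x)
                return self.out(h)                # frame-by-frame reconstruction

        model = RecurrentAutoAssociator()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        normal = torch.randn(128, 50, 40)         # placeholder "normal background" sequences
        for _ in range(20):
            opt.zero_grad()
            loss = loss_fn(model(normal), normal)
            loss.backward()
            opt.step()

        with torch.no_grad():
            novelty_score = loss_fn(model(normal[:1]), normal[:1]).item()
        print("reconstruction error (novelty score):", novelty_score)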

  18. Semantics and the multisensory brain: how meaning modulates processes of audio-visual integration.

    Science.gov (United States)

    Doehrmann, Oliver; Naumer, Marcus J

    2008-11-25

    By using meaningful stimuli, multisensory research has recently started to investigate the impact of stimulus content on crossmodal integration. Variations in this respect have often been termed as "semantic". In this paper we will review work related to the question for which tasks the influence of semantic factors has been found and which cortical networks are most likely to mediate these effects. More specifically, the focus of this paper will be on processing of object stimuli presented in the auditory and visual sensory modalities. Furthermore, we will investigate which cortical regions are particularly responsive to experimental variations of content by comparing semantically matching ("congruent") and mismatching ("incongruent") experimental conditions. In this context, recent neuroimaging studies point toward a possible functional differentiation of temporal and frontal cortical regions, with the former being more responsive to semantically congruent and the latter to semantically incongruent audio-visual (AV) stimulation. To account for these differential effects, we will suggest in the final section of this paper a possible synthesis of these data on semantic modulation of AV integration with findings from neuroimaging studies and theoretical accounts of semantic memory.

  19. Sounding ruins: reflections on the production of an ‘audio drift’

    Science.gov (United States)

    Gallagher, Michael

    2014-01-01

    This article is about the use of audio media in researching places, which I term ‘audio geography’. The article narrates some episodes from the production of an ‘audio drift’, an experimental environmental sound work designed to be listened to on a portable MP3 player whilst walking in a ruinous landscape. Reflecting on how this work functions, I argue that, as well as representing places, audio geography can shape listeners’ attention and bodily movements, thereby reworking places, albeit temporarily. I suggest that audio geography is particularly apt for amplifying the haunted and uncanny qualities of places. I discuss some of the issues raised for research ethics, epistemology and spectral geographies. PMID:29708107

  20. Interactive 3D audio: Enhancing awareness of details in immersive soundscapes?

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Schwartz, Stephen; Larsen, Jan

    2012-01-01

    Spatial audio and the possibility of interacting with the audio environment is thought to increase listeners' attention to details in a soundscape. This work examines if interactive 3D audio enhances listeners' ability to recall details in a soundscape. Nine different soundscapes were constructed...

  1. Process Dissociation and Mixture Signal Detection Theory

    Science.gov (United States)

    DeCarlo, Lawrence T.

    2008-01-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely…
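
    One common way to write the mixture signal detection model for an old-item trial, sketched here from the general literature rather than from the paper's own notation, is

        P("old" | old) = \lambda \Phi(d_1 - c) + (1 - \lambda) \Phi(d_2 - c),    P("old" | new) = \Phi(-c)

    where \Phi is the standard normal distribution function, c is the response criterion, \lambda is the mixing probability, and d_1, d_2 are the strengths of the two latent components. The dual process model then appears as the special case in which d_1 is large enough that \Phi(d_1 - c) is effectively 1, so \lambda acts as an all-or-none recollection probability while d_2 carries the familiarity-based component.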

  2. IELTS speaking instruction through audio/voice conferencing

    Directory of Open Access Journals (Sweden)

    Hamed Ghaemi

    2012-02-01

    Full Text Available The currentstudyaimsatinvestigatingtheimpactofAudio/Voiceconferencing,asanewapproachtoteaching speaking, on the speakingperformanceand/orspeakingband score ofIELTScandidates.Experimentalgroupsubjectsparticipated in an audio conferencing classwhile those of the control group enjoyed attending in a traditional IELTS Speakingclass. At the endofthestudy,allsubjectsparticipatedinanIELTSExaminationheldonNovemberfourthin Tehran,Iran.To compare thegroupmeansforthestudy,anindependentt-testanalysiswasemployed.Thedifferencebetween experimental and control groupwasconsideredtobestatisticallysignificant(P<0.01.Thatisthecandidates in experimental group have outperformed the ones in control group in IELTS Speaking test scores.

  3. The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not for polyphonic music.

    Science.gov (United States)

    Aucouturier, Jean-Julien; Defreville, Boris; Pachet, François

    2007-08-01

    The "bag-of-frames" approach (BOF) to audio pattern recognition represents signals as the long-term statistical distribution of their local spectral features. This approach has proved nearly optimal for simulating the auditory perception of natural and human environments (or soundscapes), and is also the most predominent paradigm to extract high-level descriptions from music signals. However, recent studies show that, contrary to its application to soundscape signals, BOF only provides limited performance when applied to polyphonic music signals. This paper proposes to explicitly examine the difference between urban soundscapes and polyphonic music with respect to their modeling with the BOF approach. First, the application of the same measure of acoustic similarity on both soundscape and music data sets confirms that the BOF approach can model soundscapes to near-perfect precision, and exhibits none of the limitations observed in the music data set. Second, the modification of this measure by two custom homogeneity transforms reveals critical differences in the temporal and statistical structure of the typical frame distribution of each type of signal. Such differences may explain the uneven performance of BOF algorithms on soundscapes and music signals, and suggest that their human perception rely on cognitive processes of a different nature.

  4. Digital signal processing with Matlab examples

    CERN Document Server

    Giron-Sierra, Jose Maria

    2017-01-01

    This is the first volume in a trilogy on modern Signal Processing. The three books provide a concise exposition of signal processing topics, and a guide to support individual practical exploration based on MATLAB programs. This book includes MATLAB codes to illustrate each of the main steps of the theory, offering a self-contained guide suitable for independent study. The code is embedded in the text, helping readers to put into practice the ideas and methods discussed. The book is divided into three parts, the first of which introduces readers to periodic and non-periodic signals. The second part is devoted to filtering, which is an important and commonly used application. The third part addresses more advanced topics, including the analysis of real-world non-stationary signals and data, e.g. structural fatigue, earthquakes, electro-encephalograms, birdsong, etc. The book’s last chapter focuses on modulation, an example of the intentional use of non-stationary signals.

  5. Virtual environment display for a 3D audio room simulation

    Science.gov (United States)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio ConvolvotronsTM by Crystal River Engineering and coupled to the listener with a Polhemus IsotrakTM, tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object oriented C++ program code.

  6. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  7. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection

  8. Digital signal processing for NDT

    International Nuclear Information System (INIS)

    Georgel, B.

    1994-01-01

    NDT begins to adapt and use the most recent developments of digital signal and image processing. We briefly sum up the main characteristics of NDT situations (particularly noise and inverse problem formulation) and comment on techniques already used or just emerging (SAFT, split spectrum, adaptive learning network, noise reference filtering, stochastic models, neural networks). This survey is focused on ultrasonics, eddy currents and X-ray radiography. The final objective of end users (availability of automatic diagnosis systems) cannot be achieved only by signal processing algorithms. A close cooperation with other techniques such as artificial intelligence has therefore to be implemented. (author). 20 refs

  9. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary regarded as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that the sparse linear prediction dictionary improves the performance of speech signal reconstruction compared with a discrete cosine transform (DCT) matrix.
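
    The sampling and recovery steps can be sketched as follows. For simplicity, the learned K-SVD sparse linear prediction dictionary is replaced here by an inverse-DCT basis and the test frame is synthetically sparse, so this illustrates the CS machinery (random Gaussian sampling plus OMP recovery) rather than reproducing the paper's results. scipy and scikit-learn are assumed.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 10                      # frame length, measurements, sparsity

        # stand-in dictionary: inverse-DCT basis (a learned K-SVD dictionary would replace this)
        D = idct(np.eye(n), axis=0, norm="ortho")

        # synthetic speech frame that is k-sparse in the dictionary
        coeffs = np.zeros(n)
        coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        x = D @ coeffs

        # random Gaussian measurement matrix and compressed observations
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)
        y = Phi @ x

        # recover the sparse coefficients with OMP, then reconstruct the frame
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi @ D, y)
        x_hat = D @ omp.coef_
        print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))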

  10. Audio-tactile integration and the influence of musical training.

    Science.gov (United States)

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  11. Audio-tactile integration and the influence of musical training.

    Directory of Open Access Journals (Sweden)

    Anja Kuchenbuch

    Full Text Available Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  12. Dynamically-Loaded Hardware Libraries (HLL) Technology for Audio Applications

    DEFF Research Database (Denmark)

    Esposito, A.; Lomuscio, A.; Nunzio, L. Di

    2016-01-01

    In this work, we apply hardware acceleration to embedded systems running audio applications. We present a new framework, Dynamically-Loaded Hardware Libraries or HLL, to dynamically load hardware libraries on reconfigurable platforms (FPGAs). Provided a library of application-specific processors......, we load on-the-fly the specific processor in the FPGA, and we transfer the execution from the CPU to the FPGA-based accelerator. The proposed architecture provides excellent flexibility with respect to the different audio applications implemented, high quality audio, and an energy efficient solution....

  13. A review of lossless audio compression standards and algorithms

    Science.gov (United States)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and the growing demand for storage. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing specifically on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared for verification. Advanced representations of LPC, such as LSP decomposition techniques, are also discussed.
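
    The prediction step at the heart of LPC-based lossless coders can be sketched as follows: fit the predictor from the autocorrelation sequence, transmit the integer residual, and let the decoder rerun the same predictor so reconstruction is bit-exact. The signal, predictor order, and rounding convention are illustrative assumptions; real coders such as FLAC additionally apply Rice coding to the residual and adapt the predictor per block.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        rng = np.random.default_rng(0)
        t = np.arange(4096)
        x = np.round(3000 * np.sin(2 * np.pi * 440 * t / 44100)
                     + rng.normal(0, 20, t.size)).astype(np.int32)

        order = 8
        xf = x.astype(float)
        # autocorrelation method: solve the Yule-Walker equations for the LP coefficients
        r = np.correlate(xf, xf, "full")[x.size - 1:x.size + order]
        a = solve_toeplitz(r[:order], r[1:order + 1])

        # integer prediction and residual (the residual is what gets entropy-coded)
        pred = np.zeros_like(x)
        for i in range(order, x.size):
            pred[i] = int(round(np.dot(a, x[i - order:i][::-1])))
        residual = x - pred

        # lossless: the decoder rebuilds the samples from the same predictor plus the residual
        decoded = residual.copy()
        for i in range(order, x.size):
            decoded[i] = residual[i] + int(round(np.dot(a, decoded[i - order:i][::-1])))
        assert np.array_equal(decoded, x)

        print("sample std %.0f -> residual std %.0f" % (x.std(), residual[order:].std()))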

  14. Effect of audio in-vehicle red light-running warning message on driving behavior based on a driving simulator experiment.

    Science.gov (United States)

    Yan, Xuedong; Liu, Yang; Xu, Yongcun

    2015-01-01

    Drivers' incorrect decisions of crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation on the effectiveness of the red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.

  15. Signal processing methods for MFE plasma diagnostics

    International Nuclear Information System (INIS)

    Candy, J.V.; Casper, T.; Kane, R.

    1985-02-01

    The application of various signal processing methods to extract energy storage information from plasma diamagnetism sensors during physics experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) is discussed. We show how these processing techniques can be used to decrease the uncertainty in the corresponding sensor measurements. The algorithms suggested are implemented using SIG, an interactive signal processing package developed at LLNL.

  16. Audio Source Separation in Reverberant Environments Using β-Divergence-Based Nonnegative Factorization

    DEFF Research Database (Denmark)

    Fakhry, Mahmoud; Svaizer, Piergiorgio; Omologo, Maurizio

    2017-01-01

    In Gaussian model-based multichannel audio source separation, the likelihood of observed mixtures of source signals is parametrized by source spectral variances and by associated spatial covariance matrices. These parameters are estimated by maximizing the likelihood through an expectation-maximization algorithm and used to separate the signals by means of multichannel Wiener filtering. We propose to estimate these parameters by applying nonnegative factorization based on prior information on source variances. In the nonnegative factorization, spectral basis matrices can be defined as the prior information. The matrices can be either extracted or indirectly made available through a redundant library that is trained in advance. In a separate step, applying nonnegative tensor factorization, two algorithms are proposed in order to either extract or detect the basis matrices that best represent......
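
    The central step described above, factorizing a spectrogram against pre-trained spectral basis matrices, can be illustrated in a few lines of NumPy. The sketch uses plain multiplicative updates with a Euclidean cost on synthetic data, whereas the paper works with the beta-divergence and spatial covariances; it is meant only to show the role of the fixed basis W and the estimated activations H.

        import numpy as np

        def nmf_activations(V, W, n_iter=200, eps=1e-10):
            # Given a magnitude spectrogram V (freq x time) and a pre-trained
            # spectral basis W (freq x components), estimate activations H with
            # multiplicative updates (Euclidean cost, for illustration only).
            H = np.random.rand(W.shape[1], V.shape[1])
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
            return H

        # Toy usage: two fixed spectral templates, random mixture spectrogram.
        W = np.random.rand(257, 2)
        V = np.random.rand(257, 100)
        H = nmf_activations(V, W)
        V_hat = W @ H   # per-source parts W[:, k:k+1] @ H[k:k+1, :] would feed a Wiener mask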

  17. Class D audio amplifiers for high voltage capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis

    Audio reproduction systems contain two key components, the amplifier and the loudspeaker. In the last 20-30 years the technology of audio amplifiers has undergone a fundamental paradigm shift: Class D audio amplifiers have replaced the linear amplifiers, which suffered from the well-known issues of high volume, weight, and cost. Highly efficient class D amplifiers are now widely available, offering power densities that their linear counterparts cannot match. Unlike the technology of audio amplifiers, the loudspeaker is still based on the traditional electrodynamic transducer invented by C.W. Rice...... The low level of acoustical output power and complex amplifier requirements have limited the commercial success of the technology. Horn or compression drivers are typically favoured when high acoustic output power is required; this is, however, at the expense of significant distortion combined...

  18. All-optical signal processing of OTDM and OFDM signals based on time-domain Optical Fourier Transformation

    DEFF Research Database (Denmark)

    Clausen, Anders; Guan, Pengyu; Mulvad, Hans Christian Hansen

    2014-01-01

    All-optical time-domain Optical Fourier Transformation utilised for signal processing of ultra-high-speed OTDM signals and OFDM signals will be presented.

  19. Can audio recording improve patients' recall of outpatient consultations?

    DEFF Research Database (Denmark)

    Wolderslund, Maiken; Kofoed, Poul-Erik; Axboe, Mette

    Introduction: In order to give patients the possibility to listen to their consultation again, we have designed a system which gives them access to digital audio recordings of their consultations. An Interactive Voice Response platform enables the audio recording and gives the patients access...... and those who have not (control). The audio recordings and the interviews are coded according to six themes: Test results, Treatment, Risks, Future tests, Advice and Plan. Afterwards, the extent of the patients' recall is assessed by comparing the accuracy of the patients' statements (interview...

  20. Financial signal processing and machine learning

    CERN Document Server

    Kulkarni, Sanjeev R; Malioutov, Dmitry M

    2016-01-01

    The modern financial industry has been required to deal with large and diverse portfolios in a variety of asset classes often with limited market data available. Financial Signal Processing and Machine Learning unifies a number of recent advances made in signal processing and machine learning for the design and management of investment portfolios and financial engineering. This book bridges the gap between these disciplines, offering the latest information on key topics including characterizing statistical dependence and correlation in high dimensions, constructing effective and robust risk measures, and their use in portfolio optimization and rebalancing. The book focuses on signal processing approaches to model return, momentum, and mean reversion, addressing theoretical and implementation aspects. It highlights the connections between portfolio theory, sparse learning and compressed sensing, sparse eigen-portfolios, robust optimization, non-Gaussian data-driven risk measures, graphical models, causal analy...

  1. Signals and Systems in Biomedical Engineering Signal Processing and Physiological Systems Modeling

    CERN Document Server

    Devasahayam, Suresh R

    2013-01-01

    The use of digital signal processing is ubiquitous in the field of physiology and biomedical engineering. The application of such mathematical and computational tools requires a formal or explicit understanding of physiology. Formal models and analytical techniques are interlinked in physiology as in any other field. This book takes a unitary approach to physiological systems, beginning with signal measurement and acquisition, followed by signal processing, linear systems modelling, and computer simulations. The signal processing techniques range across filtering, spectral analysis and wavelet analysis. Emphasis is placed on fundamental understanding of the concepts as well as solving numerical problems. Graphs and analogies are used extensively to supplement the mathematics. Detailed models of nerve and muscle at the cellular and systemic levels provide examples for the mathematical methods and computer simulations. Several of the models are sufficiently sophisticated to be of value in understanding real wor...

  2. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
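
    A "generic lossy compression simulation" of the kind sketched above can be approximated very crudely without any psychoacoustic model, for example by keeping only the strongest spectral components of each frame. The NumPy sketch below is purely illustrative of such an attack stage and is not the StirMark implementation.

        import numpy as np

        def simulate_lossy_compression(x, frame=1024, keep_fraction=0.2):
            # Crude codec stand-in: per frame, keep only the strongest DFT bins
            # and zero the rest (no psychoacoustic model involved).
            y = x.copy()
            for start in range(0, len(x) - frame + 1, frame):
                X = np.fft.rfft(x[start:start + frame])
                thresh = np.quantile(np.abs(X), 1 - keep_fraction)
                X[np.abs(X) < thresh] = 0
                y[start:start + frame] = np.fft.irfft(X, n=frame)
            return y

        # A watermark detector would then be run on the degraded signal y to test robustness.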

  3. An Interactive Concert Program Based on Infrared Watermark and Audio Synthesis

    Science.gov (United States)

    Wang, Hsi-Chun; Lee, Wen-Pin Hope; Liang, Feng-Ju

    The objective of this research is to propose a video/audio system which allows the user to listen to the typical music notes in a concert program under infrared detection. The system synthesizes audio with different pitches and tempi in accordance with the data encoded in a 2-D barcode embedded in the infrared watermark. The digital halftoning technique has been used to fabricate the infrared watermark composed of halftone dots by both amplitude modulation (AM) and frequency modulation (FM). The results show that this interactive system successfully recognizes the barcode and synthesizes audio under infrared detection of a concert program which remains valid for human observation of the contents. This interactive video/audio system has greatly expanded the capability of printed paper towards audio display and also has many potential value-added applications.
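
    The audio-synthesis half of such a system reduces to rendering the decoded pitch and tempo data as tones. Below is a minimal sketch (sine tones, one beat per note; the MIDI note numbers and 440 Hz reference are illustrative assumptions, not the paper's exact synthesis engine).

        import numpy as np

        def synthesize_notes(midi_notes, tempo_bpm=120, sr=44100):
            # Render each decoded note as a sine tone lasting one beat, standing in
            # for the audio synthesis step driven by the barcode payload.
            beat = 60.0 / tempo_bpm
            t = np.arange(int(beat * sr)) / sr
            tones = [np.sin(2 * np.pi * 440.0 * 2 ** ((n - 69) / 12) * t) for n in midi_notes]
            return np.concatenate(tones)

        signal = synthesize_notes([60, 64, 67], tempo_bpm=100)   # C major arpeggio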

  4. Audio Networking in the Music Industry

    Directory of Open Access Journals (Sweden)

    Glebs Kuzmics

    2018-01-01

    Full Text Available This paper surveys the rôle of computer networking technologies in the music industry. A comparison of the relevant technologies with their defining advantages and disadvantages, an analysis and discussion of the situation in the market of network-enabled audio products, and a discussion of different devices are presented. The idea of replacing a proprietary solution with open-source and freeware software programs has been chosen as the fundamental concept of this research. The technologies covered include: native IEEE AVnu Alliance Audio Video Bridging (AVB), CobraNet®, Audinate Dante™ and Harman BLU Link.

  5. Frequency Hopping Method for Audio Watermarking

    Directory of Open Access Journals (Sweden)

    A. Anastasijević

    2012-11-01

    Full Text Available This paper evaluates the degradation of audio content for a perceptible, removable watermark. Two different approaches to embedding the watermark in the spectral domain were investigated. The frequencies for watermark embedding are chosen according to a pseudorandom sequence, making the methods robust. Consequently, the lower-quality audio can be used for promotional purposes. For a fee, the watermark can be removed with a secret watermarking key. Objective and subjective testing was conducted in order to measure the degradation level of the watermarked music samples and to examine residual distortion for different parameters of the watermarking algorithm and different music genres.
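
    The key-driven frequency selection described above can be sketched in a few lines: a secret key seeds a pseudorandom generator, which picks one spectral bin per frame to modify, and the same key later locates (and can undo) the modification. This is an illustrative NumPy sketch, not the authors' embedding rule.

        import numpy as np

        def embed_watermark(x, key, frame=2048, strength=0.05):
            # Key-seeded pseudorandom choice of one DFT bin per frame; its
            # magnitude is boosted, which a key-holder can later reverse.
            n_frames = len(x) // frame
            rng = np.random.default_rng(key)
            bins = rng.integers(50, frame // 2, size=n_frames)
            y = x.copy()
            for i, b in enumerate(bins):
                seg = slice(i * frame, (i + 1) * frame)
                X = np.fft.rfft(y[seg])
                X[b] *= 1 + strength
                y[seg] = np.fft.irfft(X, n=frame)
            return y, bins

        audio = np.random.randn(10 * 2048)              # stand-in for a music excerpt
        marked, marked_bins = embed_watermark(audio, key=1234)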

  6. Four-quadrant flyback converter for direct audio power amplification

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple solution with better...

  7. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design...... and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design, and to support the application of such systems in areas such as the humanities....

  8. Television picture signal processing

    NARCIS (Netherlands)

    1998-01-01

    Field or frame memories are often used in television receivers for video signal processing functions, such as noise reduction and/or flicker reduction. Television receivers also have graphic features such as teletext, menu-driven control systems, multilingual subtitling, an electronic TV-Guide, etc.

  9. Wavelets and multiscale signal processing

    CERN Document Server

    Cohen, Albert

    1995-01-01

    Since their appearance in mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...

  10. Non-commutative tomography and signal processing

    International Nuclear Information System (INIS)

    Mendes, R Vilela

    2015-01-01

    Non-commutative tomography is a technique originally developed and extensively used by Professors M A Man’ko and V I Man’ko in quantum mechanics. Because signal processing deals with operators that, in general, do not commute with time, the same technique has a natural extension to this domain. Here, a review is presented of the theory and some applications of non-commutative tomography for time series as well as some new results on signal processing on graphs. (paper)

  11. The Signal Validation method of Digital Process Instrumentation System on signal conditioner for SMART

    International Nuclear Information System (INIS)

    Moon, Hee Gun; Park, Sang Min; Kim, Jung Seon; Shon, Chang Ho; Park, Heui Youn; Koo, In Soo

    2005-01-01

    The function of the PIS (Process Instrumentation System) for SMART is to acquire process data from sensors or transmitters. The PIS consists of a signal conditioner, an A/D converter, a DSP (Digital Signal Processor) and a NIC (Network Interface Card), so it is a fully digital system after the A/D converter. The PI cabinet and the PDAS (Plant Data Acquisition System) in a commercial plant are responsible for data acquisition from the sensors or transmitters, including RTD, TC, level, flow, pressure and so on. The PDAS has software that processes each sensor's data, and the PI cabinet has the signal conditioner, which is needed for maintenance and testing. The signal conditioner has a potentiometer to adjust the span and zero for test and maintenance. The PIS of SMART also has a signal conditioner with the same span and zero adjustment as in the commercial plant, because the signal conditioner conditions the signal for the A/D converter to a range such as 0∼10 Vdc. However, adjusting the span and zero is a manual test and calibration procedure. This paper therefore presents a method of signal validation and calibration that exploits the digital features of SMART. The converter types include I/E (current to voltage), R/E (resistance to voltage), F/E (frequency to voltage) and V/V (voltage to voltage). This paper shows only the signal validation and calibration for the I/E converter, which converts level, pressure and flow signals in the 4∼20 mA range into signals suitable for A/D conversion, such as 0∼10 Vdc.
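
    The I/E scaling mentioned at the end is a linear span/zero mapping from the 4-20 mA loop current to the 0-10 Vdc A/D range, and an out-of-range reading is itself a simple validation check. A small illustrative sketch follows (the function name and limits are assumptions, not SMART design values).

        def current_loop_to_volts(i_ma, i_min=4.0, i_max=20.0, v_min=0.0, v_max=10.0):
            # Linear span/zero scaling of a 4-20 mA transmitter signal to the
            # 0-10 Vdc range expected by the A/D converter; out-of-range readings
            # are flagged as a basic signal-validation check.
            if not (i_min <= i_ma <= i_max):
                raise ValueError(f"reading {i_ma} mA outside loop range: sensor or loop fault")
            return v_min + (i_ma - i_min) * (v_max - v_min) / (i_max - i_min)

        print(current_loop_to_volts(12.0))   # mid-scale reading -> 5.0 V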

  12. Processing Electromyographic Signals to Recognize Words

    Science.gov (United States)

    Jorgensen, C. C.; Lee, D. D.

    2009-01-01

    A recently invented speech-recognition method applies to words that are articulated by means of the tongue and throat muscles but are otherwise not voiced or, at most, are spoken sotto voce. This method could satisfy a need for speech recognition under circumstances in which normal audible speech is difficult, poses a hazard, is disturbing to listeners, or compromises privacy. The method could also be used to augment traditional speech recognition by providing an additional source of information about articulator activity. The method can be characterized as intermediate between (1) conventional speech recognition through processing of voice sounds and (2) a method, not yet developed, of processing electroencephalographic signals to extract unspoken words directly from thoughts. This method involves computational processing of digitized electromyographic (EMG) signals from muscle innervation acquired by surface electrodes under a subject's chin near the tongue and on the side of the subject's throat near the larynx. After preprocessing, digitization, and feature extraction, EMG signals are processed by a neural-network pattern classifier, implemented in software, that performs the bulk of the recognition task as described.
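
    The final stage described above, a neural-network classifier over extracted EMG features, can be illustrated with scikit-learn. The data, feature dimensions and vocabulary size below are synthetic placeholders, not taken from the NASA system.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Illustrative stand-in for the EMG word recognizer: one feature vector per
        # utterance (e.g. windowed RMS energy and spectral features) and word labels.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 32))          # 200 utterances x 32 EMG features (synthetic)
        y = rng.integers(0, 5, size=200)        # 5 vocabulary words

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
        print("training accuracy:", clf.score(X, y))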

  13. Unsupervised topic modelling on South African parliament audio data

    CSIR Research Space (South Africa)

    Kleynhans, N

    2014-11-01

    Full Text Available Using a speech recognition system to convert spoken audio to text can enable the structuring of large collections of spoken audio data. A convenient means to summarise or cluster spoken data is to identify the topic under discussion. There are many...

  14. Classifying laughter and speech using audio-visual feature prediction

    NARCIS (Netherlands)

    Petridis, Stavros; Asghar, Ali; Pantic, Maja

    2010-01-01

    In this study, a system that discriminates laughter from speech by modelling the relationship between audio and visual features is presented. The underlying assumption is that this relationship is different between speech and laughter. Neural networks are trained which learn the audio-to-visual and

  15. Sistema de adquisición y procesamiento de audio

    OpenAIRE

    Pérez Segurado, Rubén

    2015-01-01

    The objective of this project is the design and implementation of a platform for an audio processing system. The system receives an analogue audio signal from an audio source, allows digital processing of that signal, and generates a processed signal that is sent to external loudspeakers. The processing system is built around: - A Lattice FPGA device, model MachX02-7000-HE, which hosts all the...

  16. ENERGY STAR Certified Audio Video

    Data.gov (United States)

    U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of...

  17. Digital signal processing application in nuclear spectroscopy

    Directory of Open Access Journals (Sweden)

    O. V. Zeynalova

    2009-06-01

    Full Text Available Digital signal processing algorithms for nuclear particle spectroscopy are described, along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high-purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionisation chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier; therefore, the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between piled-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
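
    One widely used recursive shaping scheme of the kind referred to above is the trapezoidal filter: the exponential preamplifier decay is deconvolved into a step, and two moving-average passes turn the step into a trapezoid. The sketch below illustrates that general scheme only; it is not claimed to be the exact algorithm of the paper, and the time constants are arbitrary.

        import numpy as np

        def trapezoidal_shaper(x, rise=40, flat=10, tau=200.0):
            # 1) Undo the exponential decay so each pulse becomes a step.
            d = np.empty_like(x)
            d[0] = x[0]
            d[1:] = x[1:] - np.exp(-1.0 / tau) * x[:-1]
            step = np.cumsum(d)
            # 2) Two boxcar (moving-average) passes turn the step into a trapezoid.
            def boxcar(s, n):
                c = np.cumsum(np.concatenate(([0.0], s)))
                return (c[n:] - c[:-n]) / n
            return boxcar(boxcar(step, rise), rise + flat)

        t = np.arange(2000)
        pulse = np.zeros(2000)
        pulse[500:] = np.exp(-(t[500:] - 500) / 200.0)   # toy preamplifier pulse
        shaped = trapezoidal_shaper(pulse)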

  18. Signal processing for smart cards

    Science.gov (United States)

    Quisquater, Jean-Jacques; Samyde, David

    2003-06-01

    In 1998, Paul Kocher showed that when a smart card computes cryptographic algorithms, for signatures or encryption, its power consumption or its radiation leaks information. The keys or the secrets hidden in the card can then be recovered using a differential measurement based on the intercorrelation function. Many silicon manufacturers use desynchronization countermeasures to defeat power analysis. In this article we detail a new resynchronization technique. This method can be used to facilitate the use of a neural network for code recognition, making it possible to reverse-engineer a software code automatically. Using data and clock separation methods, we show how to optimize the synchronization using signal processing. We then compare these methods with watermarking methods for 1D and 2D signals. The latest watermarking detection improvements can be applied to signal processing for smart cards with very few modifications. Bayesian processing is one of the best ways to do Differential Power Analysis, and it is possible to extract a PIN code from a smart card in very few samples. This article therefore shows the need to continue to set up effective countermeasures for cryptographic processors. Although the idea of using advanced signal processing operators has been commonly known for a long time, no publication explains what results can be obtained. The main idea of differential measurement is to use the cross-correlation of two random variables and to repeat consumption measurements on the processor to be analyzed. We use two processors clocked at the same external frequency and computing the same data. The applications of our design are numerous. Two measurements provide the inputs of a central operator. With the most accurate operator we can improve the signal-to-noise ratio, re-synchronize the acquisition clock with the internal one, or remove jitter. The analysis based on consumption or electromagnetic measurements can be improved using our structure. At first sight
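
    The resynchronization step itself often amounts to finding the lag that maximizes the cross-correlation between a reference trace and each measured trace, then shifting the trace accordingly. A minimal NumPy sketch of that alignment step (illustrative only, not the authors' operator):

        import numpy as np

        def align_traces(reference, trace):
            # Resynchronize a measured power trace against a reference by finding
            # the lag that maximizes their cross-correlation, then shifting it.
            ref = reference - reference.mean()
            tr = trace - trace.mean()
            xcorr = np.correlate(tr, ref, mode="full")
            lag = np.argmax(xcorr) - (len(ref) - 1)
            return np.roll(trace, -lag), lag

        # A differential analysis then averages many aligned traces per key hypothesis.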

  19. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audio-visual documentation in the information systems of national, regional and local television channels is analysed. To this end, the documentary chain through which sport audio-visual information passes is traced, analysing each of its parameters and thereby presenting a series of recommendations and norms for the preparation of the sport audio-visual record. Evidently the audio-visual sport documentation difference i...

  20. Multi Carrier Modulation Audio Power Amplifier with Programmable Logic

    DEFF Research Database (Denmark)

    Christiansen, Theis; Andersen, Toke Meyer; Knott, Arnold

    2009-01-01

    While switch-mode audio power amplifiers allow compact implementations and high output power levels due to their high power efficiency, they are very well known for creating electromagnetic interference (EMI) with other electronic equipment. To lower the EMI of switch-mode (class D) audio power a...

  1. All-optical signal processing data communication and storage applications

    CERN Document Server

    Eggleton, Benjamin

    2015-01-01

    This book provides a comprehensive review of the state of the art in optical signal processing technologies and devices. It presents breakthrough solutions for enabling a pervasive use of optics in data communication and signal storage applications, and presents optical signal processing as a solution to overcome the capacity crunch in communication networks. The book's content ranges from the development of innovative materials and devices, such as graphene and slow-light structures, to the use of nonlinear optics for secure quantum information processing, overcoming the classical Shannon limit on channel capacity, and microwave signal processing. Although it holds the promise of a substantial speed improvement, optics in today's communication infrastructure remains largely confined to the signal transport layer, as it lags behind electronics as far as signal processing is concerned. This situation will change in the near future as the tremendous growth of data traffic requires energy efficient and ful...

  2. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptibility of the watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
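
    A common building block of SVD-plus-quantization watermarking is to quantize a singular value of a block of transform coefficients so that its quantization index carries one bit. Below is a minimal NumPy sketch of that idea (quantization-index embedding on the largest singular value); it is a generic illustration, not the book's exact DWT-DCT-SVD scheme.

        import numpy as np

        def embed_bit_svd(block, bit, delta=0.1):
            # Quantize the largest singular value of a coefficient block so that
            # its parity encodes one watermark bit.
            U, s, Vt = np.linalg.svd(block, full_matrices=False)
            q = np.round(s[0] / delta)
            if int(q) % 2 != bit:
                q += 1
            s[0] = q * delta
            return U @ np.diag(s) @ Vt

        def extract_bit_svd(block, delta=0.1):
            s = np.linalg.svd(block, compute_uv=False)
            return int(np.round(s[0] / delta)) % 2

        block = np.random.rand(8, 8)            # e.g. a block of DWT/DCT coefficients
        marked = embed_bit_svd(block, 1)
        assert extract_bit_svd(marked) == 1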

  3. Perancangan Sistem Audio Mobil Berbasiskan Sistem Pakar dan Web

    Directory of Open Access Journals (Sweden)

    Djunaidi Santoso

    2011-12-01

    Full Text Available Designing car audio that fits the user's needs is a fun activity. However, the design often consumes more time and money, since it has to be discussed with experts several times. For easy access to information when designing a car audio system, as well as for error prevention, a car audio design application based on an expert system and the web is proposed for those who do not have sufficient time and money to consult experts directly. The system consists of tutorial modules designed using the Hypertext Preprocessor (PHP) with MySQL as the database. The design is evaluated using the black-box testing method, which focuses on the functional needs of the application. Tests are performed by providing inputs and checking that the outputs correspond to the function of each module. The test results show the correspondence between input and output, which means that the program meets the initial goals of the design.

  4. Digital signal processing theory and practice

    CERN Document Server

    Rao, K Deergha

    2018-01-01

    The book provides a comprehensive exposition of all major topics in digital signal processing (DSP). With numerous illustrative examples for easy understanding of the topics, it also includes MATLAB-based examples with codes in order to encourage the readers to become more confident of the fundamentals and to gain insights into DSP. Further, it presents real-world signal processing design problems using MATLAB and programmable DSP processors. In addition to problems that require analytical solutions, it discusses problems that require solutions using MATLAB at the end of each chapter. Divided into 13 chapters, it addresses many emerging topics, which are not typically found in advanced texts on DSP. It includes a chapter on adaptive digital filters used in the signal processing problems for faster acceptable results in the presence of changing environments and changing system requirements. Moreover, it offers an overview of wavelets, enabling readers to easily understand the basics and applications of this po...

  5. BAT: An open-source, web-based audio events annotation tool

    OpenAIRE

    Blai Meléndez-Catalan, Emilio Molina, Emilia Gómez

    2017-01-01

    In this paper we present BAT (BMAT Annotation Tool), an open-source, web-based tool for the manual annotation of events in audio recordings developed at BMAT (Barcelona Music and Audio Technologies). The main feature of the tool is that it provides an easy way to annotate the salience of simultaneous sound sources. Additionally, it allows the user to define multiple ontologies to adapt to multiple tasks and offers the possibility of cross-annotating audio data. Moreover, it is easy to install and deploy...

  6. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher-level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux...

  7. Audio Teleconferencing: Low Cost Technology for External Studies Networking.

    Science.gov (United States)

    Robertson, Bill

    1987-01-01

    This discussion of the benefits of audio teleconferencing for distance education programs and for business and government applications focuses on the recent experience of Canadian educational users. Four successful operating models and their costs are reviewed, and it is concluded that audio teleconferencing is cost efficient and educationally…

  8. Automatic Organisation and Quality Analysis of User-Generated Content with Audio Fingerprinting

    OpenAIRE

    Cavaco, Sofia; Magalhaes, Joao; Mordido, Gonçalo

    2018-01-01

    The increase in the quantity of user-generated content in social media has boosted the importance of analysing and organising content by its quality. Here, we propose a method that uses audio fingerprinting to organise and infer the quality of user-generated audio content. The proposed method detects the overlapping segments between different audio clips in order to organise and cluster the data according to events, and to infer the audio quality of the samples. A test setup with conce...

  9. Grating geophone signal processing based on wavelet transform

    Science.gov (United States)

    Li, Shuqing; Zhang, Huan; Tao, Zhifei

    2008-12-01

    The grating digital geophone is designed on the basis of the grating measurement technique, which benefits from an error-averaging effect and a wide dynamic range to improve the precision of weak-signal detection. This paper introduces the principle of the grating digital geophone and its post-processing system. The signal acquisition circuit uses an ATmega32 chip as its core and displays the waveform in LabWindows through an RS-232 data link. The wavelet transform is adopted in this paper to filter the grating digital geophone's output signal, since the signal is non-stationary. This data processing method is compared with the FIR filter in widespread domestic use. The results indicate that the wavelet algorithm has advantages and that the SNR of the seismic signal improves markedly.
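
    Wavelet-based filtering of a non-stationary signal of this kind typically means decomposing the trace, shrinking the detail coefficients, and reconstructing. A minimal sketch using PyWavelets follows (the pywt dependency, wavelet choice and universal soft threshold are assumptions for illustration, not the paper's exact settings).

        import numpy as np
        import pywt   # PyWavelets, assumed available

        def wavelet_denoise(x, wavelet="db4", level=4):
            # Soft-threshold the detail coefficients (universal threshold), a
            # common wavelet filtering scheme for non-stationary signals.
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(x)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        t = np.linspace(0, 1, 2048)
        clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)       # toy seismic-like wavelet
        denoised = wavelet_denoise(clean + 0.2 * np.random.randn(len(t)))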

  10. An introduction to digital signal processing

    CERN Document Server

    Karl, John H

    1989-01-01

    An Introduction to Digital Signal Processing is written for those who need to understand and use digital signal processing and yet do not wish to wade through a multi-semester course sequence. Using only calculus-level mathematics, this book progresses rapidly through the fundamentals to advanced topics such as iterative least squares design of IIR filters, inverse filters, power spectral estimation, and multidimensional applications--all in one concise volume.This book emphasizes both the fundamental principles and their modern computer implementation. It presents and demonstrates how si

  11. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  12. The Effect of Audio and Animation in Multimedia Instruction

    Science.gov (United States)

    Koroghlanian, Carol; Klein, James D.

    2004-01-01

    This study investigated the effects of audio, animation, and spatial ability in a multimedia computer program for high school biology. Participants completed a multimedia program that presented content by way of text or audio with lean text. In addition, several instructional sequences were presented either with static illustrations or animations.…

  13. Selected Audio-Visual Materials for Consumer Education. [New Version.

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  14. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high- and low-pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  15. Effect of Audio Coaching on Correlation of Abdominal Displacement With Lung Tumor Motion

    International Nuclear Information System (INIS)

    Nakamura, Mitsuhiro; Narita, Yuichiro; Matsuo, Yukinori; Narabayashi, Masaru; Nakata, Manabu; Sawada, Akira; Mizowaki, Takashi; Nagata, Yasushi; Hiraoka, Masahiro

    2009-01-01

    Purpose: To assess the effect of audio coaching on the time-dependent behavior of the correlation between abdominal motion and lung tumor motion and the corresponding lung tumor position mismatches. Methods and Materials: Six patients who had a lung tumor with a motion range >8 mm were enrolled in the present study. Breathing-synchronized fluoroscopy was performed initially without audio coaching, followed by fluoroscopy with recorded audio coaching for multiple days. Two different measurements, anteroposterior abdominal displacement using the real-time positioning management system and superoinferior (SI) lung tumor motion by X-ray fluoroscopy, were performed simultaneously. Their sequential images were recorded using one display system. The lung tumor position was automatically detected with a template matching technique. The relationship between the abdominal and lung tumor motion was analyzed with and without audio coaching. Results: The mean SI tumor displacement was 10.4 mm without audio coaching and increased to 23.0 mm with audio coaching (p < .01). The correlation coefficients ranged from 0.89 to 0.97 with free breathing. Applying audio coaching, the correlation coefficients improved significantly (range, 0.93-0.99; p < .01), and the SI lung tumor position mismatches became larger in 75% of all sessions. Conclusion: Audio coaching served to increase the degree of correlation and make it more reproducible. In addition, the phase shifts between tumor motion and abdominal displacement were improved; however, all patients breathed more deeply, and the SI lung tumor position mismatches became slightly larger with audio coaching than without audio coaching.

  16. Signal processing techniques for sodium boiling noise detection

    International Nuclear Information System (INIS)

    1989-05-01

    At the Specialists' Meeting on Sodium Boiling Detection organized by the International Working Group on Fast Reactors (IWGFR) of the International Atomic Energy Agency at Chester in the United Kingdom in 1981 various methods of detecting sodium boiling were reported. But, it was not possible to make a comparative assessment of these methods because the signal condition in each experiment was different from others. That is why participants of this meeting recommended that a benchmark test should be carried out in order to evaluate and compare signal processing methods for boiling detection. Organization of the Co-ordinated Research Programme (CRP) on signal processing techniques for sodium boiling noise detection was also recommended at the 16th meeting of the IWGFR. The CRP on Signal Processing Techniques for Sodium Boiling Noise Detection was set up in 1984. Eight laboratories from six countries have agreed to participate in this CRP. The overall objective of the programme was the development of reliable on-line signal processing techniques which could be used for the detection of sodium boiling in an LMFBR core. During the first stage of the programme a number of existing processing techniques used by different countries have been compared and evaluated. In the course of further work, an algorithm for implementation of this sodium boiling detection system in the nuclear reactor will be developed. It was also considered that the acoustic signal processing techniques developed for boiling detection could well make a useful contribution to other acoustic applications in the reactor. This publication consists of two parts. Part I is the final report of the co-ordinated research programme on signal processing techniques for sodium boiling noise detection. Part II contains two introductory papers and 20 papers presented at four research co-ordination meetings since 1985. A separate abstract was prepared for each of these 22 papers. Refs, figs and tabs

  17. Four-quadrant flyback converter for direct audio power amplification

    OpenAIRE

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides simple solution with better efficiency, higher level of integration and lower component count.

  18. Self-oscillating modulators for direct energy conversion audio power amplifiers

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    Direct energy conversion audio power amplifier represents total integration of switching-mode power supply and Class D audio power amplifier into one compact stage, achieving high efficiency, high level of integration, low component count and eventually low cost. This paper presents how self-oscillating modulators can be used with the direct switching-mode audio power amplifier to improve its performance by providing fast hysteretic control with high power supply rejection ratio, open-loop stability and high bandwidth. Its operation is thoroughly analyzed and simulated waveforms of a prototype amplifier are presented. (au)

  19. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance...... training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability...

  20. Local Control of Audio Environment: A Review of Methods and Applications

    Directory of Open Access Journals (Sweden)

    Jussi Kuutti

    2014-02-01

    Full Text Available The concept of a local audio environment is to have sound playback locally restricted such that, ideally, adjacent regions of an indoor or outdoor space could exhibit their own individual audio content without interfering with each other. This would enable people to listen to their content of choice without disturbing others next to them, yet, without any headphones to block conversation. In practice, perfect sound containment in free air cannot be attained, but a local audio environment can still be satisfactorily approximated using directional speakers. Directional speakers may be based on regular audible frequencies or they may employ modulated ultrasound. Planar, parabolic, and array form factors are commonly used. The directivity of a speaker improves as its surface area and sound frequency increases, making these the main design factors for directional audio systems. Even directional speakers radiate some sound outside the main beam, and sound can also reflect from objects. Therefore, directional speaker systems perform best when there is enough ambient noise to mask the leaking sound. Possible areas of application for local audio include information and advertisement audio feed in commercial facilities, guiding and narration in museums and exhibitions, office space personalization, control room messaging, rehabilitation environments, and entertainment audio systems.

  1. High-speed optical coherence tomography signal processing on GPU

    International Nuclear Information System (INIS)

    Li Xiqi; Shi Guohua; Zhang Yudong

    2011-01-01

    The signal processing speed of spectral domain optical coherence tomography (SD-OCT) has become a bottleneck in many medical applications. Recently, a time-domain interpolation method was proposed. This method not only achieves a better signal-to-noise ratio (SNR) but also a shorter signal processing time for SD-OCT than the widely used zero-padding interpolation method. Furthermore, the re-sampled data are obtained by convolving the acquired data with the interpolation coefficients in the time domain, so many interpolations can be performed concurrently, which makes the method well suited to parallel computing. Ultra-high-speed optical coherence tomography signal processing can therefore be realized by using a graphics processing unit (GPU) with the compute unified device architecture (CUDA). This paper introduces the signal processing steps of SD-OCT on the GPU. An experiment was performed to acquire one frame of SD-OCT data (400 A-lines × 2048 pixels per A-line) and to process the data on the GPU in real time. The results show that processing can be finished in 6.208 milliseconds, which is 37 times faster than on a central processing unit (CPU).
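
    The time-domain interpolation referred to above computes each resampled point as a short convolution of neighbouring spectrometer samples with precomputed coefficients, which is why every output sample can be computed independently on a GPU thread. A small NumPy sketch of the per-sample operation follows (the sinc coefficients and tap count are illustrative assumptions; a CUDA kernel would apply the same dot product, one output sample per thread).

        import numpy as np

        def resample_time_domain(a_line, positions, taps=8):
            # Resample a spectrometer A-line at non-uniform wavenumber positions by
            # convolving neighbouring samples with precomputed (here: sinc)
            # coefficients; each output sample is independent of the others.
            out = np.zeros(len(positions))
            half = taps // 2
            for i, p in enumerate(positions):
                base = int(np.floor(p))
                idx = np.clip(np.arange(base - half + 1, base + half + 1), 0, len(a_line) - 1)
                out[i] = np.dot(a_line[idx], np.sinc(p - idx))
            return out

        a_line = np.random.rand(2048)
        k_positions = np.linspace(10, 2030, 2048) + 0.3 * np.sin(np.linspace(0, 3, 2048))
        resampled = resample_time_domain(a_line, k_positions)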

  2. Analogue Signal Processing: Collected Papers 1994-95

    DEFF Research Database (Denmark)

    1996-01-01

    This document is a collection of the papers presented at international conferences and in international journals by the analogue signal processing group of the Electronics Institute, Technical University of Denmark, in 1994 and 1995.

  3. Hot topics: Signal processing in acoustics

    Science.gov (United States)

    Gaumond, Charles F.

    2005-09-01

    Signal processing in acoustics is a multidisciplinary group of people that work in many areas of acoustics. We have chosen two areas that have shown exciting new applications of signal processing to acoustics or have shown exciting and important results from the use of signal processing. In this session, two hot topics are shown: the use of noiselike acoustic fields to determine sound propagation structure and the use of localization to determine animal behaviors. The first topic shows the application of correlation to geo-acoustic fields to determine the Green's function for propagation through the Earth. These results can then be further used to solve geo-acoustic inverse problems. The first topic also shows the application of correlation using oceanic noise fields to determine the Green's function through the ocean. These results also have utility for oceanic inverse problems. The second topic shows exciting results from the detection, localization, and tracking of marine mammals by two different groups. Results from detection and localization of bullfrogs are shown, too. Each of these studies contributed to the knowledge of animal behavior. [Work supported by ONR.]

  4. Balancing Audio

    DEFF Research Database (Denmark)

    Walther-Hansen, Mads

    2016-01-01

    This paper explores the concept of balance in music production and examines the role of conceptual metaphors in reasoning about audio editing. Balance may be the most central concept in record production, however, the way we cognitively understand and respond meaningfully to a mix requiring balance is not thoroughly understood. In this paper I treat balance as a metaphor that we use to reason about several different actions in music production, such as adjusting levels, editing the frequency spectrum or the spatiality of the recording. This study is based on an exploration of a linguistic corpus of sound...

  5. Four-quadrant flyback converter for direct audio power amplification

    Energy Technology Data Exchange (ETDEWEB)

    Ljusev, P.; Andersen, Michael A.E.

    2005-07-01

    This paper presents a bidirectional, four-quadrant flyback converter for use in direct audio power amplification. When compared to the standard Class-D switching-mode audio power amplifier with a separate power supply, the proposed four-quadrant flyback converter provides a simple and compact solution with high efficiency, a higher level of integration, a lower component count, less board space and eventually lower cost. Both peak and average current-mode control for use with 4Q flyback power converters are described and compared. Integrated magnetics is presented which simplifies the construction of the auxiliary power supplies for control biasing and isolated gate drives. The feasibility of the approach is proven on an audio power amplifier prototype for subwoofer applications. (au)

  6. Proceedings of IEEE Machine Learning for Signal Processing Workshop XV

    DEFF Research Database (Denmark)

    Larsen, Jan

    These proceedings contain refereed papers presented at the Fifteenth IEEE Workshop on Machine Learning for Signal Processing (MLSP’2005), held in Mystic, Connecticut, USA, September 28-30, 2005. This is a continuation of the IEEE Workshops on Neural Networks for Signal Processing (NNSP) organized...... by the NNSP Technical Committee of the IEEE Signal Processing Society. The name of the Technical Committee, and hence of the Workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the Technical Committee. The conference is organized...... by the Machine Learning for Signal Processing Technical Committee with sponsorship of the IEEE Signal Processing Society. Following the practice started two years ago, the bound volume of the proceedings is going to be published by IEEE following the Workshop, and we are pleased to offer to conference attendees...

  7. Animation, audio, and spatial ability: Optimizing multimedia for scientific explanations

    Science.gov (United States)

    Koroghlanian, Carol May

    This study investigated the effects of audio, animation and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text or audio with lean text and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a biology course were blocked by spatial ability and randomly assigned to one of four treatments (Text-Static Illustration, Audio-Static Illustration, Text-Animation, Audio-Animation). The study examined the effects of instructional mode (Text vs. Audio), illustration mode (Static Illustration vs. Animation) and spatial ability (Low vs. High) on practice and posttest achievement, attitude and time. Results for practice achievement indicated that high spatial ability participants achieved more than low spatial ability participants. Similar results for posttest achievement and spatial ability were not found. Participants in the Static Illustration treatments achieved the same as participants in the Animation treatments on both the practice and posttest. Likewise, participants in the Text treatments achieved the same as participants in the Audio treatments on both the practice and posttest. In terms of attitude, participants responded favorably to the computer-based instructional program. They found the program interesting, felt the static illustrations or animations made the explanations easier to understand and concentrated on learning the material. Furthermore, participants in the Animation treatments felt the information was easier to understand than participants in the Static Illustration treatments. However, no difference for any attitude item was found for participants in the Text as compared to those in the Audio treatments. Significant differences were found by spatial ability for three attitude items concerning concentration and interest. In all three items, the low spatial ability participants responded more positively

  8. Parametric Audio Based Decoder and Music Synthesizer for Mobile Applications

    NARCIS (Netherlands)

    Oomen, A.W.J.; Szczerba, M.Z.; Therssen, D.

    2011-01-01

    This paper reviews parametric audio coders and discusses novel technologies introduced in a low-complexity, low-power-consumption audio decoder and music synthesizer platform developed by the authors. The decoder uses a parametric coding scheme based on the MPEG-4 Parametric Audio standard. In order to

  9. Audio Feedback -- Better Feedback?

    Science.gov (United States)

    Voelkel, Susanne; Mello, Luciane V.

    2014-01-01

    National Student Survey (NSS) results show that many students are dissatisfied with the amount and quality of feedback they get for their work. This study reports on two case studies in which we tried to address these issues by introducing audio feedback to one undergraduate (UG) and one postgraduate (PG) class, respectively. In case study one…

  10. Detecting Parkinson's disease from sustained phonation and speech signals.

    Directory of Open Access Journals (Sweden)

    Evaldas Vaiciukynas

    Full Text Available This study investigates signals from sustained phonation and text-dependent speech modalities for Parkinson's disease screening. Phonation corresponds to the vowel /a/ voicing task and speech to the pronunciation of a short sentence in the Lithuanian language. Signals were recorded through two channels simultaneously, namely, acoustic cardioid (AC) and smart phone (SP) microphones. Additional modalities were obtained by splitting the speech recording into voiced and unvoiced parts. Information in each modality is summarized by 18 well-known audio feature sets. Random forest (RF) is used as a machine learning algorithm, both for individual feature sets and for decision-level fusion. Detection performance is measured by the out-of-bag equal error rate (EER) and the cost of the log-likelihood ratio. The Essentia audio feature set was the best using the AC speech modality and the YAAFE audio feature set was the best using the SP unvoiced modality, achieving EERs of 20.30% and 25.57%, respectively. Fusion of all feature sets and modalities resulted in an EER of 19.27% for the AC and 23.00% for the SP channel. Non-linear projection of an RF-based proximity matrix into the 2D space enriched medical decision support by visualization.
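
    The core classifier and its out-of-bag evaluation can be sketched directly with scikit-learn. Everything below (data, dimensions, labels) is synthetic and purely illustrative of the random-forest/out-of-bag mechanics, not the study's features or results.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Illustrative stand-in: one row per recording, columns are summary audio
        # features (synthetic here), labels 1 = Parkinson's, 0 = healthy control.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(300, 50))
        y = rng.integers(0, 2, size=300)

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
        oob_scores = rf.oob_decision_function_[:, 1]   # out-of-bag probability of class 1
        # An equal-error-rate estimate is then read off the ROC of oob_scores vs. y;
        # decision-level fusion averages such scores across feature sets and modalities.
        print("OOB accuracy:", rf.oob_score_)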

  11. Signal Processing Methods Monitor Cranial Pressure

    Science.gov (United States)

    2010-01-01

    Dr. Norden Huang, of Goddard Space Flight Center, invented a set of algorithms (called the Hilbert-Huang Transform, or HHT) for analyzing nonlinear and nonstationary signals that developed into a user-friendly signal processing technology for analyzing time-varying processes. At an auction managed by Ocean Tomo Federal Services LLC, licenses of 10 U.S. patents and 1 domestic patent application related to HHT were sold to DynaDx Corporation, of Mountain View, California. DynaDx is now using the licensed NASA technology for medical diagnosis and prediction of brain blood flow-related problems, such as stroke, dementia, and traumatic brain injury.

  12. Using Songs as Audio Materials in Teaching Turkish as a Foreign Language

    Science.gov (United States)

    Keskin, Funda

    2011-01-01

    The use of songs as audio materials in teaching Turkish as a foreign language is valuable because songs are an important part of a language's culture, and the transfer of cultural aspects accelerates the language learning process. In the light of this view, it becomes necessary to bring cultural aspects into the classroom environment in teaching…

  13. MEL-IRIS: An Online Tool for Audio Analysis and Music Indexing

    Directory of Open Access Journals (Sweden)

    Dimitrios Margounakis

    2009-01-01

    Full Text Available Chroma is an important attribute of music and sound, although it has not yet been adequately defined in the literature. As such, it can be used for further analysis of sound, resulting in interesting colorful representations that can be used in many tasks: indexing, classification, and retrieval. Especially in Music Information Retrieval (MIR), the visualization of the chromatic analysis can be used for comparison, pattern recognition, melodic sequence prediction, and color-based searching. MEL-IRIS is the tool which has been developed in order to analyze audio files and characterize music based on chroma. The tool implements specially designed algorithms and a unique way of visualizing the results. The tool is network-oriented and can be installed on audio servers in order to handle large music collections. Several samples from world music have been tested and processed in order to demonstrate the possible uses of such an analysis.
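
    A basic chroma representation of the kind such tools build on can be computed by folding STFT bin magnitudes onto the 12 pitch classes. The NumPy sketch below shows only that generic folding; MEL-IRIS's own chromatic-analysis algorithm is more elaborate.

        import numpy as np

        def chroma_from_signal(x, sr=22050, n_fft=4096):
            # Fold STFT bin energies onto 12 pitch classes (A = class 0 here).
            n_frames = len(x) // n_fft
            chroma = np.zeros((12, n_frames))
            freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
            valid = freqs > 30                     # ignore DC / very low bins
            pitch_class = (np.round(12 * np.log2(freqs[valid] / 440.0)) % 12).astype(int)
            for t in range(n_frames):
                mag = np.abs(np.fft.rfft(x[t * n_fft:(t + 1) * n_fft]))[valid]
                np.add.at(chroma[:, t], pitch_class, mag)
            return chroma / (chroma.max() + 1e-12)

        sr = 22050
        t = np.arange(sr * 2) / sr
        chroma = chroma_from_signal(np.sin(2 * np.pi * 440 * t), sr)   # energy in class 0 (A)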

  14. Emotion-based Music Rretrieval on a Well-reduced Audio Feature Space

    DEFF Research Database (Denmark)

    Ruxanda, Maria Magdalena; Chua, Bee Yong; Nanopoulos, Alexandros

    2009-01-01

    Music expresses emotion. A number of audio extracted features have influence on the perceived emotional expression of music. These audio features generate a high-dimensional space, on which music similarity retrieval can be performed effectively with respect to human perception of the music-emotion. However, real-time systems that retrieve music over large music databases can achieve an order-of-magnitude performance increase by applying multidimensional indexing over a dimensionally reduced audio feature space. To meet this performance achievement, in this paper extensive studies are conducted on a number of dimensionality reduction algorithms, including both classic and novel approaches. The paper clearly envisages which dimensionality reduction techniques on the considered audio feature space can preserve on average the accuracy of the emotion-based music retrieval.

  15. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. At first, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. It then uses the audio feature candidates as cues and develops a fusion method that effectively uses the diverse visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  16. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    Science.gov (United States)

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then

  17. Software for biomedical engineering signal processing laboratory experiments.

    Science.gov (United States)

    Tompkins, Willis J; Wilson, J

    2009-01-01

    In the early 1990s we developed a special computer program called UW DigiScope to provide a mechanism for anyone interested in biomedical digital signal processing to study the field without requiring any other instrument except a personal computer. There are many digital filtering and pattern recognition algorithms used in processing biomedical signals. In general, students have very limited opportunity to have hands-on access to the mechanisms of digital signal processing. In a typical course, the filters are designed non-interactively, which does not provide the student with significant understanding of the design constraints of such filters nor their actual performance characteristics. UW DigiScope 3.0 is the first major update since version 2.0 was released in 1994. This paper provides details on how the new version, based on MATLAB, works with signals, including the filter design tool that is the programming interface between UW DigiScope and processing algorithms.

  18. Improving audio chord transcription by exploiting harmonic and metric knowledge

    NARCIS (Netherlands)

    de Haas, W.B.; Rodrigues Magalhães, J.P.; Wiering, F.

    2012-01-01

    We present a new system for chord transcription from polyphonic musical audio that uses domain-specific knowledge about tonal harmony and metrical position to improve chord transcription performance. Low-level pulse and spectral features are extracted from an audio source using the Vamp plugin

  19. Self-oscillating modulators for direct energy conversion audio power amplifiers

    DEFF Research Database (Denmark)

    Ljusev, Petar; Andersen, Michael Andreas E.

    2005-01-01

    Direct energy conversion audio power amplifier represents total integration of switching-mode power supply and Class D audio power amplifier into one compact stage, achieving high efficiency, high level of integration, low component count and eventually low cost. This paper presents how self-oscillating...

  20. Extraction, Mapping, and Evaluation of Expressive Acoustic Features for Adaptive Digital Audio Effects

    DEFF Research Database (Denmark)

    Holfelt, Jonas; Csapo, Gergely; Andersson, Nikolaj Schwab

    2017-01-01

    This paper describes the design and implementation of a real-time adaptive digital audio effect with an emphasis on using expressive audio features that control effect param- eters. Research in adaptive digital audio effects is cov- ered along with studies about expressivity and important...

  1. A Content-Adaptive Analysis and Representation Framework for Audio Event Discovery from "Unscripted" Multimedia

    Science.gov (United States)

    Radhakrishnan, Regunathan; Divakaran, Ajay; Xiong, Ziyou; Otsuka, Isao

    2006-12-01

    We propose a content-adaptive analysis and representation framework to discover events using audio features from "unscripted" multimedia such as sports and surveillance for summarization. The proposed analysis framework performs an inlier/outlier-based temporal segmentation of the content. It is motivated by the observation that "interesting" events in unscripted multimedia occur sparsely in a background of usual or "uninteresting" events. We treat the sequence of low/mid-level features extracted from the audio as a time series and identify subsequences that are outliers. The outlier detection is based on eigenvector analysis of the affinity matrix constructed from statistical models estimated from the subsequences of the time series. We define the confidence measure on each of the detected outliers as the probability that it is an outlier. Then, we establish a relationship between the parameters of the proposed framework and the confidence measure. Furthermore, we use the confidence measure to rank the detected outliers in terms of their departures from the background process. Our experimental results with sequences of low- and mid-level audio features extracted from sports video show that "highlight" events can be extracted effectively as outliers from a background process using the proposed framework. We proceed to show the effectiveness of the proposed framework in bringing out suspicious events from surveillance videos without any a priori knowledge. We show that such temporal segmentation into background and outliers, along with the ranking based on the departure from the background, can be used to generate content summaries of any desired length. Finally, we also show that the proposed framework can be used to systematically select "key audio classes" that are indicative of events of interest in the chosen domain.
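
    As a hedged, much-simplified sketch of the inlier/outlier idea described above (not the authors' implementation), the snippet below models short windows of a one-dimensional feature stream by simple statistics, builds a Gaussian affinity matrix between windows, and uses an eigenvector of the normalized matrix to separate background windows from outlier windows. The window length, the statistics used and the kernel width are all illustrative choices.

```python
# Hedged sketch: affinity-matrix eigenvector analysis for temporal outliers.
import numpy as np

rng = np.random.default_rng(1)
feature = rng.normal(0.0, 1.0, 600)          # background feature stream
feature[300:330] += 6.0                      # one sparse "interesting" event

win = 30
windows = feature.reshape(-1, win)           # non-overlapping subsequences
stats = np.c_[windows.mean(axis=1), windows.std(axis=1)]

d2 = ((stats[:, None, :] - stats[None, :, :]) ** 2).sum(-1)
affinity = np.exp(-d2 / (2 * d2.mean()))     # Gaussian affinity between windows

deg = affinity.sum(axis=1)
norm = affinity / np.sqrt(np.outer(deg, deg))
vals, vecs = np.linalg.eigh(norm)
indicator = vecs[:, -2]                      # second eigenvector splits the windows
outliers = np.where(np.sign(indicator) != np.sign(np.median(indicator)))[0]
print("outlier windows:", outliers)          # expected: the window holding the event
```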

  2. IEEE International Workshop on Machine Learning for Signal Processing: Preface

    DEFF Research Database (Denmark)

    Tao, Jianhua

    The 21st IEEE International Workshop on Machine Learning for Signal Processing will be held in Beijing, China, on September 18–21, 2011. The workshop series is the major annual technical event of the IEEE Signal Processing Society's Technical Committee on Machine Learning for Signal Processing...

  3. Book: Marine Bioacoustic Signal Processing and Analysis

    Science.gov (United States)

    2011-09-30

    ... physicists, and mathematicians. However, more and more biologists and psychologists are starting to use advanced signal processing techniques and...

  4. Analogue Signal Processing: Collected Papers 1996-97

    DEFF Research Database (Denmark)

    1997-01-01

    This document is a collection of the papers presented at international conferences and in international journals by the analogue signal processing group of the Department of Information Technology, Technical University of Denmark, in 1996 and 1997.

  5. A combined model of sensory and cognitive representations underlying tonal expectations in music: from audio signals to behavior.

    Science.gov (United States)

    Collins, Tom; Tillmann, Barbara; Barrett, Frederick S; Delbé, Charles; Janata, Petr

    2014-01-01

    Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  6. Let Their Voices Be Heard! Building a Multicultural Audio Collection.

    Science.gov (United States)

    Tucker, Judith Cook

    1992-01-01

    Discusses building a multicultural audio collection for a library. Gives some guidelines about selecting materials that really represent different cultures. Audio materials that are considered fall roughly into the categories of children's stories, didactic materials, oral histories, poetry and folktales, and music. The goal is an authentic…

  7. Modeling laser velocimeter signals as triply stochastic Poisson processes

    Science.gov (United States)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
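
    The following is a hedged, simplified simulation of the kind of model the paper analyzes: photon arrivals are drawn from a nonhomogeneous Poisson rate shaped by a Doppler-modulated burst, and each arrival is filtered by a one-sided pulse shape. All rates, the burst envelope and the pulse shape are arbitrary illustration values, not parameters from the paper.

```python
# Hedged sketch: nonhomogeneous filtered Poisson (shot-noise) LDV burst.
import numpy as np

rng = np.random.default_rng(2)
fs = 1e6                                     # simulation sample rate (samples/s)
t = np.arange(0, 2e-3, 1 / fs)

envelope = np.exp(-((t - 1e-3) ** 2) / (2 * (2e-4) ** 2))
rate = 5e4 * envelope * (1 + np.cos(2 * np.pi * 2e4 * t))   # photons per second

# Bernoulli thinning approximates the nonhomogeneous Poisson arrivals.
arrivals = rng.random(t.size) < rate / fs

# Each photon produces a filtered single-electron pulse at the detector output.
pulse = np.exp(-np.arange(50) / 10.0)
signal = np.convolve(arrivals.astype(float), pulse)[: t.size]
print("simulated photon count:", int(arrivals.sum()))
```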

  8. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that have made it possible to employ more sophisticated algorithms. This paper describes the 'main stream' digital signal processing functions, along with the associated implementation considerations, in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.

  9. Low Latency Audio Video: Potentials for Collaborative Music Making through Distance Learning

    Science.gov (United States)

    Riley, Holly; MacLeod, Rebecca B.; Libera, Matthew

    2016-01-01

    The primary purpose of this study was to examine the potential of LOw LAtency (LOLA), a low latency audio visual technology designed to allow simultaneous music performance, as a distance learning tool for musical styles in which synchronous playing is an integral aspect of the learning process (e.g., jazz, folk styles). The secondary purpose was…

  10. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...

  11. Illustration of decimation in digital signal processing (DSP) systems ...

    African Journals Online (AJOL)

    ... and engineering, especially in the areas of communication and medicine. ... This multirate DSP has been found useful in applications like digital audio, video and even GSM technology. The work is implemented using MATLAB™ software.
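
    As a hedged illustration of decimation in a multirate chain (the article's own MATLAB implementation is not reproduced here), the sketch below uses SciPy to low-pass filter and downsample an audio-rate signal by a factor of four; the sample rates and test tones are assumptions made for the example.

```python
# Hedged sketch: anti-alias filtering plus downsampling (decimation).
import numpy as np
from scipy import signal

fs = 48_000                                  # original sample rate
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 15_000 * t)

q = 4                                        # decimation factor: 48 kHz -> 12 kHz
y = signal.decimate(x, q, ftype="fir")       # FIR anti-alias filter + keep every 4th sample

print(len(x), "->", len(y))                  # roughly 4800 -> 1200 samples
```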

  12. Multilevel inverter based class D audio amplifier for capacitive transducers

    DEFF Research Database (Denmark)

    Nielsen, Dennis; Knott, Arnold; Andersen, Michael A. E.

    2014-01-01

    The reduced semiconductor voltage stress makes the multilevel inverters especially interesting, when driving capacitive transducers for audio applications. A ± 300 V flying capacitor class D audio amplifier driving a 100 nF load in the midrange region of 0.1-3.5 kHz with Total Harmonic Distortion...

  13. The Use of Audio and Animation in Computer Based Instruction.

    Science.gov (United States)

    Koroghlanian, Carol; Klein, James D.

    This study investigated the effects of audio, animation, and spatial ability in a computer-based instructional program for biology. The program presented instructional material via text, or via audio with lean text, and included eight instructional sequences presented either via static illustrations or animations. High school students enrolled in a…

  14. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    Science.gov (United States)

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  15. Ultra-Fast Optical Signal Processing in Nonlinear Silicon Waveguides

    DEFF Research Database (Denmark)

    Oxenløwe, Leif Katsuo; Galili, Michael; Pu, Minhao

    2011-01-01

    We describe recent demonstrations of exploiting highly nonlinear silicon nanowires for processing Tbit/s optical data signals. We perform demultiplexing and optical waveform sampling of 1.28 Tbit/s and wavelength conversion of 640 Gbit/s data signals.

  16. A soft-core processor architecture optimised for radar signal processing applications

    CSIR Research Space (South Africa)

    Broich, R

    2013-12-01

    Full Text Available A high-performance soft-core processing architecture is proposed. To develop such a processing architecture, data and signal-flow characteristics of common radar signal processing algorithms are analysed. Each algorithm is broken down into signal processing...

  17. Decoding Signal Processing at the Single-Cell Level

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, H. Steven

    2017-12-01

    The ability of cells to detect and decode information about their extracellular environment is critical to generating an appropriate response. In multicellular organisms, cells must decode dozens of signals from their neighbors and extracellular matrix to maintain tissue homeostasis while still responding to environmental stressors. How cells detect and process information from their surroundings through a surprisingly limited number of signal transduction pathways is one of the most important questions in biology. Despite many decades of research, many of the fundamental principles that underlie cell signal processing remain obscure. However, in this issue of Cell Systems, Gillies et al. present compelling evidence that the early response gene circuit can act as a linear signal integrator, thus providing significant insight into how cells handle fluctuating signals and noise in their environment.

  18. SignalPlant: an open signal processing software platform

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Filip; Jurčo, Juraj; Halámek, Josef; Jurák, Pavel

    2016-01-01

    Roč. 37, č. 7 (2016), N38-N48 ISSN 0967-3334 R&D Projects: GA ČR GAP103/11/0933; GA MŠk(CZ) LO1212; GA ČR GAP102/12/2034 Institutional support: RVO:68081731 Keywords : data visualization * software * signal processing * ECG * EEG Subject RIV: FS - Medical Facilities ; Equipment Impact factor: 2.058, year: 2016

  19. El Digital Audio Tape Recorder. Contra autores y creadores

    Directory of Open Access Journals (Sweden)

    Jun Ono

    2015-01-01

    Full Text Available The so-called "DAT" (an abbreviation of "digital audio tape recorder") has long received coverage in the mass media of Japan and other countries as a new and controversial electronic audio product of the Japanese consumer electronics industry. What has happened to the object of this controversy?

  20. Digital signal processing for He3 proportional counter

    International Nuclear Information System (INIS)

    Zeynalov, Sh.S.; Ahmadov, Q.S.

    2010-01-01

    Full text: Data acquisition systems for nuclear spectroscopy have traditionally been based on analog shaping amplifiers followed by analog-to-digital converters. Recently, however, new systems based on digital signal processing make it possible to replace the analog shaping and timing circuitry with numerical algorithms that derive properties of the pulse such as its amplitude. DSP is a fully numerical analysis of the detector pulse signals, and this technique demonstrates significant advantages over analog systems in some circumstances. From a mathematical point of view, one can consider the signal evolution from the detector to the ADC as a sequence of transformations that can be described by precisely defined mathematical expressions. Digital signal processing with ADCs makes it possible to utilize further information on the signal pulses from radiation detectors. In the experiment, each step of the signal generation in the 3He-filled proportional counter was described using digital signal processing (DSP) techniques. The electronic system consisted of a detector, a preamplifier and a digital oscilloscope. The pulses from the detector were digitized using a digital storage oscilloscope, which allowed signal digitization with an accuracy of 8 bits (256 levels) and a sampling rate of up to 5 × 10^8 samples/s. Cf-252 was used as a neutron source. To obtain the detector output current pulse I(t) created by the motion of the ion/electron pairs, an algorithm was written that can easily be programmed using modern computer programming languages.

  1. Phonocardiography Signal Processing

    CERN Document Server

    Abbas, Abbas K

    2009-01-01

    The auscultation method is an important diagnostic indicator for hemodynamic anomalies. Heart sound classification and analysis play an important role in the auscultative diagnosis. The term phonocardiography refers to the tracing technique of heart sounds and the recording of cardiac acoustics vibration by means of a microphone-transducer. Therefore, understanding the nature and source of this signal is important to give us a tendency for developing a competent tool for further analysis and processing, in order to enhance and optimize cardiac clinical diagnostic approach. This book gives the

  2. Haptic and Audio Interaction Design

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 5th International Workshop on Haptic and Audio Interaction Design, HAID 2010 held in Copenhagen, Denmark, in September 2010. The 21 revised full papers presented were carefully reviewed and selected for inclusion in the book. The papers are or...

  3. Active noise cancellation in hearing devices

    DEFF Research Database (Denmark)

    2013-01-01

    Disclosed is a hearing device system comprising at least one hearing aid circuitry and at least one active noise cancellation unit. The at least one hearing aid circuitry comprises at least one input transducer adapted to convert a first audio signal to an electric audio signal; a signal processor connected to the at least one input transducer and adapted to process said electric audio signal by at least partially correcting for a hearing loss of a user; and an output transducer adapted to generate from at least said processed electric audio signal a sound pressure in an ear canal of the user, whereby the generated sound pressure is at least partially corrected for the hearing loss of the user. The at least one active noise cancellation unit is adapted to provide an active noise cancellation signal adapted to perform active noise cancellation of an acoustical signal entering the ear canal in addition...

  4. Proceedings of IEEE Machine Learning for Signal Processing Workshop XVI

    DEFF Research Database (Denmark)

    Larsen, Jan

    These proceedings contain refereed papers presented at the sixteenth IEEE Workshop on Machine Learning for Signal Processing (MLSP'2006), held in Maynooth, Co. Kildare, Ireland, September 6-8, 2006. This is a continuation of the IEEE Workshops on Neural Networks for Signal Processing (NNSP). The name of the Technical Committee, hence of the Workshop, was changed to Machine Learning for Signal Processing in September 2003 to better reflect the areas represented by the Technical Committee. The conference is organized by the Machine Learning for Signal Processing Technical Committee... the same standard as the printed version and facilitates the reading and searching of the papers. The field of machine learning has matured considerably in both methodology and real-world application domains and has become particularly important for the solution of problems in signal processing. As reflected...

  5. Digital signal processing algorithms for nuclear particle spectroscopy

    International Nuclear Information System (INIS)

    Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.

    2007-01-01

    Digital signal processing algorithms for nuclear particle spectroscopy are described along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, for a digital pile-up elimination scheme and for ballistic deficit correction were investigated using a high purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier. Therefore the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in literature
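
    As a hedged sketch of one such recursive shaping procedure (a generic trapezoidal shaper, not necessarily the filter used in the paper), the snippet below applies a pole-zero correction to an exponentially decaying preamplifier pulse and forms a trapezoid from running sums; the decay constant, shaping lengths and pulse amplitude are illustrative.

```python
# Hedged sketch: recursive trapezoidal shaping of a charge-sensitive preamp pulse.
import numpy as np

tau = 2000.0                            # preamp decay constant, in samples (assumed)
k, m = 100, 40                          # trapezoid rise length and flat-top length

n = 4000
v = np.zeros(n)
v[500:] = 0.7 * np.exp(-np.arange(n - 500) / tau)    # ideal pulse, amplitude 0.7

# Pole-zero correction: undo the exponential decay so the pulse becomes a step.
d = np.empty(n)
d[0] = v[0]
d[1:] = v[1:] - np.exp(-1.0 / tau) * v[:-1]
step = np.cumsum(d)

# A running sum of length k minus a copy delayed by k+m yields the trapezoid.
c = np.cumsum(np.concatenate(([0.0], step)))
ma = c[k:] - c[:-k]
trap = ma.copy()
trap[k + m:] -= ma[: -(k + m)]

print("estimated pulse amplitude:", round(trap.max() / k, 3))   # ~0.7
```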

  6. Registration and processing of acoustic signal in rock drilling

    Directory of Open Access Journals (Sweden)

    Futó Jozef

    2002-03-01

    Full Text Available For the determination of an effective rock disintegration for a given tool and rock type it is necessary to define an optimal disintegration regime. Optimisation of the disintegration process by drilling means finding an appropriate pair of input parameters of disintegration, i.e. the thrust and revolutions, for a quasi-equal rock environment. The disintegration process can be optimised to reach the maximum immediate drilling rate, the minimum specific disintegration energy, or the maximum ratio of immediate drilling rate to specific disintegration energy. For the determination of the optimal thrust and revolutions it is necessary to monitor the disintegration process. Monitoring of the disintegration process in real conditions is complicated by unfavourable factors, such as the presence of water, dust, vibrations, etc. Following our present experience in the monitoring of drilling or full-profile driving, we try to replace the monitoring of input values by monitoring of the scanned acoustic signal. This method of monitoring can extend the optimisation of the disintegration process in technical practice. Its advantage consists in the registration of one acoustic signal by an appropriate microphone. Monitoring of the acoustic signal is also used in monitoring of metal machining by milling and turning jobs. The research results of scanning of the acoustic signal in the machining of metals are encouraging. The acoustic signal can be processed by different statistical parameters. The paper describes some results of monitoring of the acoustic signal in rock disintegration on the drilling stand of the Institute of Geotechnics SAS in Košice. The acoustic signal has been registered and processed in no-load run of the electric motor, in no-load run of the electric motor with a drilling fluid, and in Ruskov andesite drilling. Registration and processing of the acoustic signal is solved as a part of the research grant task within the basic research

  7. Deep Learning in Visual Computing and Signal Processing

    OpenAIRE

    Xie, Danfeng; Zhang, Lei; Bai, Li

    2017-01-01

    Deep learning is a subfield of machine learning, which aims to learn a hierarchy of features from input data. Nowadays, researchers have intensively investigated deep learning algorithms for solving challenging problems in many areas such as image classification, speech recognition, signal processing, and natural language processing. In this study, we not only review typical deep learning algorithms in computer vision and signal processing but also provide detailed information on how to apply...

  8. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
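
    A hedged toy version of the cross-spectral-matrix subtraction step (not the Optinav software, and with a synthetic array instead of the 24-microphone fairing): the CSM estimated from background-only data is subtracted from the CSM estimated with the source present, and a conventional beamformer is evaluated on both. Array size, snapshot counts and signal levels are made-up illustration values.

```python
# Hedged sketch: cross spectral matrix (CSM) subtraction before beamforming.
import numpy as np

rng = np.random.default_rng(3)
mics, snaps = 24, 400

steer = np.exp(1j * 2 * np.pi * rng.random(mics))     # source phase pattern (assumed known here)
source = 0.3 * steer[:, None] * rng.standard_normal(snaps)
noise = rng.standard_normal((mics, snaps)) + 1j * rng.standard_normal((mics, snaps))

with_source = source + noise
background = rng.standard_normal((mics, snaps)) + 1j * rng.standard_normal((mics, snaps))

csm_total = with_source @ with_source.conj().T / snaps
csm_background = background @ background.conj().T / snaps
csm_clean = csm_total - csm_background                # subtract the background CSM

w = steer / np.sqrt(mics)                             # conventional beamformer weights
print("beam power before subtraction:", abs(w.conj() @ csm_total @ w).round(2))
print("beam power after subtraction: ", abs(w.conj() @ csm_clean @ w).round(2))
```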

  9. Detectors and signal processing for high-energy physics

    International Nuclear Information System (INIS)

    Rehak, P.

    1981-01-01

    Basic principles of the particle detection and signal processing for high-energy physics experiments are presented. It is shown that the optimum performance of a properly designed detector system is not limited by incidental imperfections, but solely by more fundamental limitations imposed by the quantum nature and statistical behavior of matter. The noise sources connected with the detection and signal processing are studied. The concepts of optimal filtering and optimal detector/amplifying device matching are introduced. Signal processing for a liquid argon calorimeter is analyzed in some detail. The position detection in gas counters is studied. Resolution in drift chambers for the drift coordinate measurement as well as the second coordinate measurement is discussed

  10. Responding Effectively to Composition Students: Comparing Student Perceptions of Written and Audio Feedback

    Science.gov (United States)

    Bilbro, J.; Iluzada, C.; Clark, D. E.

    2013-01-01

    The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…

  11. Digital signals processing using non-linear orthogonal transformation in frequency domain

    Directory of Open Access Journals (Sweden)

    Ivanichenko E.V.

    2017-12-01

    Full Text Available The rapid progress of computer technology in recent decades has led to the wide introduction of digital information processing methods in practically all fields of scientific research. Among the various applications of computing, one of the most important places is occupied by digital signal processing (DSP) systems, which are used in data processing for the remote navigation of aerospace and marine objects, communications, radiophysics, digital optics and a number of other applications. Digital signal processing is a dynamically developing area that covers both hardware and software tools. Related areas are information theory, in particular the theory of optimal signal reception, and pattern recognition theory. In the first case the main problem is signal extraction against a background of noise and interference of different physical natures, and in the second, automatic recognition, i.e. the classification and identification of signals. In digital signal processing, by a signal we mean its mathematical description, i.e. a real function containing information on the state or behaviour of a physical system under an event, which can be defined on a continuous or discrete space of time variation or spatial coordinates. In the broad sense, DSP systems comprise a complex of algorithmic, hardware and software means. As a rule, such systems contain specialized technical means for preliminary (primary) signal processing and special technical means for secondary signal processing. The preprocessing means are designed to process the original signals, observed in the general case against a background of random noise and interference of different physical natures and represented as discrete digital samples, for the purpose of detecting and selecting the useful signal and estimating the characteristics of the detected signal. A new method of digital signal processing in the frequency

  12. Attracting and repelling in homogeneous signal processes

    International Nuclear Information System (INIS)

    Downarowicz, T; Grzegorek, P; Lacroix, Y

    2010-01-01

    Attracting and repelling are discussed on two levels: in abstract signal processes and in signal processes arising as returns to a fixed set in an ergodic dynamical system. In the first approach, among other things, we give three examples in which the sum of two Poisson (hence neutral—neither attracting nor repelling) processes comes out either neutral or attracting, or repelling, depending on how the two processes depend on each other. The main new result of the second type concerns so-called 'composite events' in the form of a union of all cylinders over blocks belonging to the δ-ball in the Hamming distance around a fixed block. We prove that in a typical ergodic nonperiodic process the majority of such 'composite events' reveal strong attracting. We discuss the practical interpretation of this result

  13. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3-octave analysis, may lead to RTF simplifications that can be useful for several audio applications, such as room compensation, room modeling and auralisation. The aim of this work is to identify the relationship of the optimal response... and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical
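
    A hedged sketch of the fractional-octave smoothing mentioned above (generic 1/3-octave averaging of a magnitude response, not the authors' exact procedure): each frequency bin is replaced by the mean magnitude over a band one third of an octave wide centred on it. The synthetic "RTF" and the FFT sizes are assumptions for the example.

```python
# Hedged sketch: 1/3-octave smoothing of a transfer function magnitude.
import numpy as np

rng = np.random.default_rng(7)
n, fs = 4096, 48_000
freqs = np.fft.rfftfreq(n, 1 / fs)

# Stand-in for an RTF magnitude: a smooth trend plus reverberant ripple.
mag = np.abs(1 / (1 + 1j * freqs / 2000)
             + 0.3 * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)))

frac = 3                                     # 1/3 octave
smoothed = np.empty_like(mag)
for i, f in enumerate(freqs):
    if f == 0:
        smoothed[i] = mag[i]
        continue
    lo, hi = f * 2 ** (-0.5 / frac), f * 2 ** (0.5 / frac)
    band = (freqs >= lo) & (freqs <= hi)
    smoothed[i] = mag[band].mean()

print("ripple std before / after:", mag.std().round(3), smoothed.std().round(3))
```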

  14. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre... and museum studies, existing case studies, and real life observations, the suggested framework instead stresses particular characteristics of contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  15. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  16. Multivariate Analysis for the Processing of Signals

    Directory of Open Access Journals (Sweden)

    Beattie J.R.

    2014-01-01

    Full Text Available Real-world experiments are becoming increasingly more complex, needing techniques capable of tracking this complexity. Signal-based measurements are often used to capture this complexity, where a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. In signals the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Since signals contain multiple information points, they have rich information content but are generally complex to comprehend. Multivariate Analysis (MA) has profoundly transformed their analysis by allowing gross simplification of the tangled web of variation. In addition, MA has provided the advantage of being much more robust to the influence of noise than univariate methods of analysis. In recent years, there has been a growing awareness that the nature of the multivariate methods allows exploitation of their benefits for purposes other than data analysis, such as pre-processing of signals with the aim of eliminating irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction in an appropriate way can allow high-fidelity denoising (removal of irreproducible non-signals), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals) and the standardisation of signal amplitude fluctuations. At present, the field is relatively small but the possibilities for much wider application are considerable. Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable in situations where the data is noisy or
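
    As a hedged illustration of the rank-reduction idea behind MA-based pre-processing (a plain truncated SVD rather than any specific method from the paper), the sketch below denoises a set of repeated synthetic signals by keeping only their leading singular components; signal shapes, noise levels and the chosen rank are illustrative assumptions.

```python
# Hedged sketch: multivariate (truncated-SVD) denoising of repeated signals.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 500)
clean = np.exp(-((x - 0.4) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.004)

# 50 noisy repeats with small amplitude fluctuations and additive noise.
signals = np.vstack([clean * (1 + 0.1 * rng.normal()) + 0.2 * rng.normal(size=x.size)
                     for _ in range(50)])

u, s, vt = np.linalg.svd(signals, full_matrices=False)
rank = 2                                      # keep only the reproducible structure
denoised = (u[:, :rank] * s[:rank]) @ vt[:rank]

print("residual std:",
      np.std(signals - clean, axis=1).mean().round(3), "->",
      np.std(denoised - clean, axis=1).mean().round(3))
```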

  17. Nonlinear dynamic macromodeling techniques for audio systems

    Science.gov (United States)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists in real-time emulation of the system response rather than in simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
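
    A hedged, very reduced sketch of a polynomial behavioral model in the same spirit (static nonlinearity only, fitted by least squares; the paper's DFT/harmonic-balance identification and real-time engine are not reproduced): the "device" is a synthetic soft-clipping stage and the polynomial order is an arbitrary choice.

```python
# Hedged sketch: least-squares polynomial macromodel of a nonlinear audio stage.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 5000)                              # excitation samples
y = np.tanh(2.0 * x) + 0.01 * rng.normal(size=x.size)     # "measured" device response

order = 7
coeffs = np.polyfit(x, y, order)                          # identified polynomial model

# Real-time-style emulation of the response to a new input signal.
t = np.linspace(0, 0.01, 480)
test = 0.8 * np.sin(2 * np.pi * 1000 * t)
emulated = np.polyval(coeffs, test)
print("max model error:", np.max(np.abs(emulated - np.tanh(2.0 * test))).round(3))
```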

  18. Processing oscillatory signals by incoherent feedforward loops

    Science.gov (United States)

    Zhang, Carolyn; Wu, Feilun; Tsoi, Ryan; Shats, Igor; You, Lingchong

    From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can generate temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs, the ability to process oscillatory signals. Our results indicate that the system's ability to translate pulsatile dynamics is limited by two constraints. The kinetics of IFFL components dictate the input range for which the network can decode pulsatile dynamics. In addition, a match between the network parameters and signal characteristics is required for optimal "counting". We elucidate one potential mechanism by which information processing occurs in natural networks, with implications in the design of synthetic gene circuits for this purpose. This work was partially supported by the National Science Foundation Graduate Research Fellowship (CZ).
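
    As a hedged toy model of the motif described above (a generic IFFL with arbitrary rate constants, not the study's calibrated model), the sketch below integrates the output Z when the input X either steps on and stays on or arrives in pulses; the adapted response to the sustained input stays low while each pulse still drives a transient.

```python
# Hedged sketch: incoherent feedforward loop response to sustained vs pulsed input.
import numpy as np

def simulate(x_of_t, dt=0.01, t_end=40.0):
    t = np.arange(0.0, t_end, dt)
    y = z = 0.0
    z_trace = []
    for ti in t:
        x = x_of_t(ti)
        dy = 0.5 * x - 0.2 * y                      # input activates the inhibitor Y
        dz = 2.0 * x / (1.0 + 5.0 * y) - 1.0 * z    # input activates Z, Y represses Z
        y += dy * dt
        z += dz * dt
        z_trace.append(z)
    return t, np.array(z_trace)

sustained = lambda ti: 1.0 if ti > 5.0 else 0.0
pulsed = lambda ti: 1.0 if (ti > 5.0 and ti % 4.0 < 1.0) else 0.0

for name, inp in [("sustained", sustained), ("pulsed", pulsed)]:
    t, z = simulate(inp)
    print(name, "peak output after adaptation:", z[t > 20].max().round(3))
```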

  19. 2012 Proceedings of the International Conference on Communications, Signal Processing, and Systems

    CERN Document Server

    Wang, Wei; Mu, Jiasong; Liang, Jing; Zhang, Baoju; Pi, Yiming; Zhao, Chenglin

    2012-01-01

    Communications, Signal Processing, and Systems is a collection of contributions coming out of the International Conference on Communications, Signal Processing, and Systems (CSPS) held October 2012. This book provides the state-of-art developments of Communications, Signal Processing, and Systems, and their interactions in multidisciplinary fields, such as Smart Grid. The book also examines Radar Systems, Sensor Networks, Radar Signal Processing, Design and Implementation of Signal Processing Systems and Applications. Written by experts and students in the fields of Communications, Signal Processing, and Systems.

  20. Engaging Students with Audio Feedback

    Science.gov (United States)

    Cann, Alan

    2014-01-01

    Students express widespread dissatisfaction with academic feedback. Teaching staff perceive a frequent lack of student engagement with written feedback, much of which goes uncollected or unread. Published evidence shows that audio feedback is highly acceptable to students but is underused. This paper explores methods to produce and deliver audio…

  1. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  2. Integration of top-down and bottom-up information for audio organization and retrieval

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand

    The increasing availability of digital audio and music calls for methods and systems to analyse and organize these digital objects. This thesis investigates three elements related to such systems, focusing on the ability to represent and elicit the user's view on the multimedia object and the system output. The aim is to provide organization and processing which align with the understanding and needs of the users. Audio and music are often characterized by a large amount of heterogeneous information. The first aspect investigated is the integration of such multi-variate and multi-modal information... (indirect scaling). Inference is performed by analytical and simulation based methods, including the Laplace approximation and expectation propagation. In order to minimize the cost of the often expensive and lengthy experimentation, sequential experiment design or active learning is supported. The setup

  3. Conflicting audio-haptic feedback in physically based simulation of walking sounds

    DEFF Research Database (Denmark)

    Turchet, Luca; Serafin, Stefania; Dimitrov, Smilen

    2010-01-01

    We describe an audio-haptic experiment conducted using a system which simulates in real time the auditory and haptic sensation of walking on different surfaces. The system is based on physical models that drive both the haptic and audio synthesizers, and on a pair of shoes enhanced with sensors and actuators. The experiment was run to examine the ability of subjects to recognize the different surfaces with both coherent and incoherent audio-haptic stimuli. Results show that in this kind of task the auditory modality is dominant over the haptic one.

  4. Discrete random signal processing and filtering primer with Matlab

    CERN Document Server

    Poularikas, Alexander D

    2013-01-01

    Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering. Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful Review. Written for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe

  5. Advanced Signal Processing for Wireless Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Xiaodong Wang

    2000-01-01

    Full Text Available There is at present a worldwide effort to develop next-generation wireless communication systems. It is envisioned that many of the future wireless systems will incorporate considerable signal-processing intelligence in order to provide advanced services such as multimedia transmission. In general, wireless channels can be very hostile media through which to communicate, due to substantial physical impediments, primarily radio-frequency interference and the time-varying nature of the channel. The need to provide universal wireless access at high data rate (which is the aim of many emerging wireless applications) presents a major technical challenge, and meeting this challenge necessitates the development of advanced signal processing techniques for multiple-access communications in non-stationary interference-rich environments. In this paper, we present some key advanced signal processing methodologies that have been developed in recent years for interference suppression in wireless networks. We will focus primarily on the problem of jointly suppressing multiple-access interference (MAI) and intersymbol interference (ISI), which are the limiting sources of interference for the high data-rate wireless systems being proposed for many emerging application areas, such as wireless multimedia. We first present a signal subspace approach to blind joint suppression of MAI and ISI. We then discuss a powerful iterative technique for joint interference suppression and decoding, so-called Turbo multiuser detection, that is especially useful for wireless multimedia packet communications. We also discuss space-time processing methods that employ multiple antennas for interference rejection and signal enhancement. Finally, we touch briefly on the problems of suppressing narrowband interference and impulsive ambient noise, two other sources of radio-frequency interference present in wireless multimedia networks.

  6. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  7. Streamlining digital signal processing a tricks of the trade guidebook

    CERN Document Server

    2012-01-01

    Streamlining Digital Signal Processing, Second Edition, presents recent advances in DSP that simplify or increase the computational speed of common signal processing operations and provides practical, real-world tips and tricks not covered in conventional DSP textbooks. It offers new implementations of digital filter design, spectrum analysis, signal generation, high-speed function approximation, and various other DSP functions. It provides: great tips, tricks of the trade, secrets, practical shortcuts, and clever engineering solutions from seasoned signal processing professionals; an assortment.

  8. METHODS FOR QUALITY ENHANCEMENT OF USER VOICE SIGNAL IN VOICE AUTHENTICATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    O. N. Faizulaieva

    2014-03-01

    Full Text Available The rationale for using the computer system user's voice in the authentication process is established. The scientific task of improving the signal-to-noise ratio of the user's voice signal in the authentication system is considered. The object of study is the process of input and output of the voice signal of the authentication system user in computer systems and networks. Methods and means for input and extraction of the voice signal against external interference signals are researched. Methods for quality enhancement of the user voice signal in voice authentication systems are suggested. As modern computer facilities, including mobile ones, have a two-channel audio card, the use of two microphones is proposed in the voice signal input system of the authentication system. Meanwhile, the task of forming a microphone array lobe in the desired voice signal registration band (100 Hz to 8 kHz) is solved. The directional properties of the proposed microphone array reduce the influence of external interference signals by a factor of two to three in the frequency range from 4 to 8 kHz. The possibilities for implementation of space-time processing of the recorded signals using constant and adaptive weighting factors are investigated. The simulation results of the proposed system for input and extraction of signals during digital processing of narrowband signals are presented. The proposed solutions make it possible to improve the signal-to-noise ratio of the recorded useful signals by 10 to 20 dB under the influence of external interference signals in the frequency range from 4 to 8 kHz. The results may be useful to specialists working in the field of voice recognition and speaker discrimination.
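
    A hedged, highly simplified sketch of the two-microphone idea (plain delay-and-sum alignment with a known integer delay; the paper's space-time processing with adaptive weights is not reproduced): the second channel is time-aligned toward the talker and averaged with the first, which reinforces the voice and averages down uncorrelated interference. The delay, noise level and "voice" signal are illustrative.

```python
# Hedged sketch: two-microphone delay-and-sum for voice pickup.
import numpy as np

fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)

voice = np.sin(2 * np.pi * 220 * t) * (t % 0.25 < 0.15)   # crude voiced bursts
delay = 3                                                  # inter-microphone delay in samples

mic1 = voice + 0.8 * rng.normal(size=t.size)
mic2 = np.roll(voice, delay) + 0.8 * rng.normal(size=t.size)

aligned = np.roll(mic2, -delay)                            # steer toward the talker
output = 0.5 * (mic1 + aligned)

def snr_db(sig, ref):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))

print("single microphone SNR:", round(snr_db(mic1, voice), 1), "dB")
print("array output SNR:     ", round(snr_db(output, voice), 1), "dB")
```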

  9. Digital signal processing for He3 proportional counter

    International Nuclear Information System (INIS)

    Ahmadov, Q.S.; Institute of Radiation Problems, ANAS, Baku

    2011-01-01

    Full text: Data acquisition systems for nuclear spectroscopy have traditionally been based on analog shaping amplifiers followed by analog-to-digital converters. Recently, however, new systems based on digital signal processing allow us to replace the analog shaping and timing circuitry with numerical algorithms that derive properties of the pulse such as its amplitude. DSP is a fully numerical analysis of the detector pulse signals, and this technique demonstrates significant advantages over analog systems in some circumstances. From a mathematical point of view, one can consider the signal evolution from the detector to the ADC as a sequence of transformations that can be described by precisely defined mathematical expressions. Digital signal processing with ADCs makes it possible to utilize further information on the signal pulses from radiation detectors [1] [2]. In the experiment, each step of the signal generation in the 3He-filled proportional counter was described using digital signal processing (DSP) techniques. The electronic system consisted of a detector, a preamplifier and a digital oscilloscope. The pulses from the detector were digitized using an OTSZS-02 (250USB)-4 digital storage oscilloscope from ZAO Rudnev-Shilyayev. This oscilloscope allowed signal digitization with an accuracy of 8 bits (256 levels) and a sampling rate of up to 5 × 10^8 samples/s. Cf-252 was used as a neutron source. To obtain the detector output current pulse I(t) created by the motion of the ion/electron pairs, an algorithm was written that can easily be programmed using modern computer programming languages

  10. Effects of Hearing Protection Device Attenuation on Unmanned Aerial Vehicle (UAV) Audio Signatures

    Science.gov (United States)

    2016-03-01

    Effects of Hearing Protection Device Attenuation on Unmanned Aerial Vehicle (UAV) Audio Signatures, by Melissa Bezandry, Adrienne Raglin, and John Noble (Army Research Laboratory). Approved for public release; distribution...

  11. Biomedical signal acquisition, processing and transmission using smartphone

    International Nuclear Information System (INIS)

    Roncagliolo, Pablo; Arredondo, Luis; Gonzalez, AgustIn

    2007-01-01

    This article describes technical aspects involved in the programming of a system of acquisition, processing and transmission of biomedical signals by using mobile devices. This task is aligned with the permanent development of new technologies for the diagnosis and sickness treatment, based on the feasibility of measuring continuously different variables as electrocardiographic signals, blood pressure, oxygen concentration, pulse or simply temperature. The contribution of this technology is settled on its portability and low cost, which allows its massive use. Specifically this work analyzes the feasibility of acquisition and the processing of signals from a standard smartphone. Work results allow to state that nowadays these equipments have enough processing capacity to execute signals acquisition systems. These systems along with external servers make it possible to imagine a near future where the possibility of making continuous measures of biomedical variables will not be restricted only to hospitals but will also begin to be more frequently used in the daily life and at home

  12. Biomedical signal acquisition, processing and transmission using smartphone

    Energy Technology Data Exchange (ETDEWEB)

    Roncagliolo, Pablo [Department of Electronics, Universidad Tecnica Federico Santa Maria, Casilla 110-V, ValparaIso (Chile); Arredondo, Luis [Department of Biomedical Engineering, Universidad de ValparaIso, Casilla 123-V, ValparaIso (Chile); Gonzalez, AgustIn [Department of Electronics, Universidad Tecnica Federico Santa MarIa, Casilla 110-V, ValparaIso (Chile)

    2007-11-15

    This article describes the technical aspects involved in programming a system for the acquisition, processing and transmission of biomedical signals using mobile devices. The work is aligned with the ongoing development of new technologies for diagnosis and treatment, based on the feasibility of continuously measuring variables such as electrocardiographic signals, blood pressure, oxygen concentration, pulse or simply temperature. The contribution of this technology lies in its portability and low cost, which allow its widespread use. Specifically, this work analyzes the feasibility of acquiring and processing signals on a standard smartphone. The results show that such devices now have enough processing capacity to run signal acquisition systems. Together with external servers, these systems make it possible to imagine a near future in which continuous measurement of biomedical variables is no longer restricted to hospitals but becomes increasingly common in daily life and at home.

  13. Biomedical signal acquisition, processing and transmission using smartphone

    Science.gov (United States)

    Roncagliolo, Pablo; Arredondo, Luis; González, Agustín

    2007-11-01

    This article describes the technical aspects involved in programming a system for the acquisition, processing and transmission of biomedical signals using mobile devices. The work is aligned with the ongoing development of new technologies for diagnosis and treatment, based on the feasibility of continuously measuring variables such as electrocardiographic signals, blood pressure, oxygen concentration, pulse or simply temperature. The contribution of this technology lies in its portability and low cost, which allow its widespread use. Specifically, this work analyzes the feasibility of acquiring and processing signals on a standard smartphone. The results show that such devices now have enough processing capacity to run signal acquisition systems. Together with external servers, these systems make it possible to imagine a near future in which continuous measurement of biomedical variables is no longer restricted to hospitals but becomes increasingly common in daily life and at home.

  14. Multidimensional Signal Processing for Sensing & Communications

    Science.gov (United States)

    2015-07-29

    “…Spectrum Sensing,” submitted to IEEE Intl. Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Cancun, Mexico, 13–16 Dec. 2015; “…diversity in echolocating mammals,” IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 65–75, Jan. 2009.

  15. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility renders it an irreplaceable resource for students, scientists, researchers, and engineers.
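
    As a concrete illustration of the transform the book covers, the sketch below computes a left-sided discrete quaternion Fourier transform under the simplifying assumption that the transform axis is the unit quaternion i, so that the symplectic decomposition reduces the QFT to two ordinary complex FFTs. The book treats general axes and both one- and two-sided transforms; the choice of axis here is our assumption for brevity.

```python
import numpy as np

def qft_left(a, b, c, d):
    """Left-sided discrete QFT of a quaternion signal q = a + b*i + c*j + d*k.

    With transform axis i, the symplectic decomposition q = (a + b*i) + (c + d*i)*j
    lets the QFT be computed as two standard complex DFTs.
    Returns the four real component spectra (A, B, C, D) of F = A + B*i + C*j + D*k.
    """
    f1 = a + 1j * b          # "simplex" part, in the i-complex plane
    f2 = c + 1j * d          # "perplex" part, the coefficient of j
    F1 = np.fft.fft(f1)
    F2 = np.fft.fft(f2)
    return F1.real, F1.imag, F2.real, F2.imag

# Example: an RGB pixel row encoded as a pure quaternion signal r*i + g*j + b*k.
r = np.random.rand(8); g = np.random.rand(8); bl = np.random.rand(8)
A, B, C, D = qft_left(np.zeros(8), r, g, bl)
print(np.round(A, 3))
```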

  16. Linear signal processing using silicon micro-ring resonators

    DEFF Research Database (Denmark)

    Peucheret, Christophe; Ding, Yunhong; Ou, Haiyan

    2012-01-01

    We review our recent achievements on the use of silicon micro-ring resonators for linear optical signal processing applications, including modulation format conversion, phase-to-intensity modulation conversion and waveform shaping.

  17. Music Identification System Using MPEG-7 Audio Signature Descriptors

    Science.gov (United States)

    You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae

    2013-01-01

    This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
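
    The abstract outlines the coarse-to-fine search strategy without further detail. A minimal sketch of that idea follows: screen the database with low-resolution descriptors, then confirm the shortlisted candidates with full-resolution descriptors against a decision threshold. The toy descriptor computation and the threshold value are placeholders for illustration; they are not the MPEG-7 audio signature definition or the paper's method II.

```python
import numpy as np

def descriptors(signal, frame=1024, decim=8):
    """Toy spectral descriptors: full-resolution per-frame log-magnitude spectra,
    plus a low-resolution version obtained by averaging groups of frames."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    full = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))
    low = full[: (n_frames // decim) * decim].reshape(-1, decim, full.shape[1]).mean(axis=1)
    return low, full

def identify(query, database, n_candidates=3, threshold=0.5):
    """Two-stage search: rank by low-resolution distance, confirm with full resolution."""
    q_low, q_full = descriptors(query)
    # Stage 1: cheap screening with low-resolution descriptors.
    coarse = []
    for name, (d_low, _) in database.items():
        m = min(len(q_low), len(d_low))
        coarse.append((np.mean((q_low[:m] - d_low[:m]) ** 2), name))
    coarse.sort()
    # Stage 2: accurate matching of the shortlisted candidates.
    best_name, best_dist = None, np.inf
    for _, name in coarse[:n_candidates]:
        d_full = database[name][1]
        m = min(len(q_full), len(d_full))
        dist = np.mean((q_full[:m] - d_full[:m]) ** 2)
        if dist < best_dist:
            best_name, best_dist = name, dist
    # The decision threshold rejects queries that are not in the database at all.
    return best_name if best_dist < threshold else None

# Tiny demo with synthetic "songs".
rng = np.random.default_rng(0)
songs = {f"song{i}": rng.standard_normal(64 * 1024) for i in range(5)}
db = {name: descriptors(sig) for name, sig in songs.items()}
print(identify(songs["song2"][: 32 * 1024], db))     # expected: "song2"
print(identify(rng.standard_normal(32 * 1024), db))  # likely: None (outside the database)
```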

  18. Music Identification System Using MPEG-7 Audio Signature Descriptors

    Directory of Open Access Journals (Sweden)

    Shingchern D. You

    2013-01-01

    Full Text Available This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system’s database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control.

  19. Technical Evaluation Report 31: Internet Audio Products (3/3)

    Directory of Open Access Journals (Sweden)

    Jim Rudolph

    2004-08-01

    Full Text Available Two contrasting additions to the online audio market are reviewed: iVocalize, a browser-based audio-conferencing software, and Skype, a PC-to-PC Internet telephone tool. These products are selected for review on the basis of their success in gaining rapid popular attention and usage during 2003-04. The iVocalize review emphasizes the product’s role in the development of a series of successful online audio communities – notably several serving visually impaired users. The Skype review stresses the ease with which the product may be used for simultaneous PC-to-PC communication among up to five users. Editor’s Note: This paper serves as an introduction to reports about online community building, and reviews of online products for disabled persons, in the next ten reports in this series. JPB, Series Ed.

  20. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of