WorldWideScience

Sample records for noise robust automatic

  1. Robust Automatic Speech Recognition Features using Complex Wavelet Packet Transform Coefficients

    Directory of Open Access Journals (Sweden)

    TjongWan Sen

    2009-11-01

Full Text Available To improve the performance of phoneme-based Automatic Speech Recognition (ASR) in noisy environments, we developed a new technique that adds robustness to clean phoneme features. These robust features are obtained from Complex Wavelet Packet Transform (CWPT) coefficients. Since the CWPT coefficients represent all the different frequency bands of the input signal, decomposing the input signal into a complete CWPT tree covers all frequencies involved in the recognition process. For time-overlapping signals with different frequency contents, e.g. a phoneme signal with noise, the CWPT coefficients are the combination of the CWPT coefficients of the phoneme signal and the CWPT coefficients of the noise. The CWPT coefficients of the phoneme signal change according to the frequency components contained in the noise. Since the number of phonemes in every language is relatively small (limited) and already well known, one can easily derive principal component vectors from a clean training dataset using Principal Component Analysis (PCA). These principal component vectors can then be used to add robustness and minimize noise effects in the testing phase. Simulation results, using Alpha Numeric 4 (AN4) from Carnegie Mellon University and NOISEX-92 examples from Rice University, showed that this new technique can be used as a feature extractor that improves the robustness of phoneme-based ASR systems in various adverse noisy conditions while preserving performance in clean environments.
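A minimal sketch of the PCA step described above: principal component vectors are learned from clean feature vectors (standing in here for CWPT-derived features) and noisy test features are back-projected from that subspace to suppress components introduced by noise. The feature dimension and number of retained components are illustrative assumptions, not values from the paper.

```python
import numpy as np

def learn_principal_subspace(clean_features, n_components=12):
    """Learn principal component vectors from clean training features.

    clean_features: (n_frames, n_dims) matrix of clean CWPT-derived features.
    Returns the mean vector and the top principal directions.
    """
    mean = clean_features.mean(axis=0)
    centered = clean_features - mean
    # SVD of the centered data gives the principal directions in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project_to_subspace(noisy_features, mean, components):
    """Reconstruct noisy features from the clean principal subspace only."""
    centered = noisy_features - mean
    coeffs = centered @ components.T          # coordinates in the clean subspace
    return coeffs @ components + mean         # back-projection suppresses noise

# Toy usage with random data standing in for CWPT coefficients.
rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 64))
noisy = clean[:10] + 0.3 * rng.normal(size=(10, 64))
mean, pcs = learn_principal_subspace(clean)
denoised = project_to_subspace(noisy, mean, pcs)
```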

  2. Optimizing edge detectors for robust automatic threshold selection : Coping with edge curvature and noise

    NARCIS (Netherlands)

    Wilkinson, M.H.F.

The Robust Automatic Threshold Selection algorithm was introduced as a threshold selection method based on a simple image statistic. The statistic is an average of the grey levels of the pixels in an image, weighted by the response of a specific edge detector at each pixel. Other authors have suggested that
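A minimal sketch of the statistic itself, with a Sobel gradient magnitude standing in for the specific edge detector discussed in the paper (the choice of detector is an assumption here):

```python
import numpy as np
from scipy import ndimage

def rats_threshold(image):
    """Robust Automatic Threshold Selection: edge-weighted mean grey level."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    edge_strength = np.hypot(gx, gy)          # edge detector response per pixel
    return (edge_strength * img).sum() / edge_strength.sum()
```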

  3. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

Full Text Available The high frequency (HF) band has both military and civilian uses. It can be used either as a primary or backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring, e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise can be modeled by Gaussian or bi-kappa distributions, depending on the time of day. Therefore, it is anticipated that a change in the noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time domain (TTD) features, higher order cumulants (HOC), and wavelet-based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.
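As an illustration of one of the feature families mentioned above, the sketch below computes two fourth-order cumulant features from a complex baseband signal; the specific cumulants and power normalization are common choices in the AMC literature and are assumptions here, not necessarily the exact set used in the article.

```python
import numpy as np

def hoc_features(x):
    """Fourth-order cumulant features of a zero-mean complex baseband signal."""
    x = np.asarray(x, dtype=complex)
    x = x - x.mean()
    c20 = np.mean(x**2)
    c21 = np.mean(np.abs(x)**2)
    c40 = np.mean(x**4) - 3 * c20**2
    c42 = np.mean(np.abs(x)**4) - np.abs(c20)**2 - 2 * c21**2
    # Normalize by signal power so the features are scale invariant.
    return {"C40": c40 / c21**2, "C42": c42 / c21**2}

# Example: 4PSK symbols in additive white Gaussian noise.
rng = np.random.default_rng(1)
symbols = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4096))
noise = 0.1 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))
print(hoc_features(symbols + noise))
```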

  4. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
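The template-based decoder described above scores a word by the length of the longest common subsequence between its spike-label sequence and each template; a minimal dynamic-programming sketch of that similarity measure follows (the integer labels standing in for feature-detector spikes are an assumption).

```python
def lcs_length(a, b):
    """Length of the longest common subsequence between two label sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ai in enumerate(a, 1):
        for j, bj in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ai == bj else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def classify(spike_sequence, templates):
    """Pick the template word whose spike sequence shares the longest subsequence."""
    return max(templates, key=lambda word: lcs_length(spike_sequence, templates[word]))

# Toy usage: neuron indices stand in for feature-detector spike labels.
templates = {"one": [3, 1, 4, 1, 5], "two": [2, 7, 1, 8, 2]}
print(classify([3, 9, 1, 4, 5], templates))   # -> "one"
```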

  5. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
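A toy numerical sketch of the basic effect, under assumptions not taken from the paper (logistic maps near a stable fixed point, mean-field coupling, additive Gaussian noise): redundant coupled units average out independent local noise, so the ensemble deviates less from the noise-free fixed point than a single unit does. This only illustrates the ordinary averaging benefit; the superlinear regime analyzed in the paper depends on conditions on the nonlinearity and initial conditions that this sketch does not attempt to reproduce.

```python
import numpy as np

def mean_deviation(n_units, coupling, noise_std, r=2.8, steps=2000, seed=0):
    """Mean-field coupled noisy logistic maps near the stable fixed point x* = 1 - 1/r.

    Returns the average absolute deviation of the ensemble mean from x*.
    """
    rng = np.random.default_rng(seed)
    x_star = 1 - 1 / r
    x = np.full(n_units, x_star)
    dev = 0.0
    for _ in range(steps):
        f = r * x * (1 - x) + noise_std * rng.normal(size=n_units)
        x = (1 - coupling) * f + coupling * f.mean()   # redundant units coupled via the mean
        dev += abs(x.mean() - x_star)
    return dev / steps

print("single unit:", mean_deviation(n_units=1, coupling=0.0, noise_std=0.02))
print("16 coupled :", mean_deviation(n_units=16, coupling=0.8, noise_std=0.02))
```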

  6. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling.

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to a poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. The proposed algorithm uses discriminative subspace learning to extract low-dimensional, maximally discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.
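The core loop, iterative subspace selection with linear discriminant analysis followed by Gaussian mixture clustering, can be sketched as below using scikit-learn. This is a simplified illustration that omits the outlier handling and the statistical test for the number of clusters described in the paper; the initialization and iteration count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def discriminative_spike_sort(waveforms, n_clusters, n_iter=10, seed=0):
    """Iterate LDA subspace selection and GMM clustering on spike waveforms."""
    labels = KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(waveforms)
    for _ in range(n_iter):
        # Project onto the subspace that best separates the current clusters.
        n_comp = min(len(np.unique(labels)) - 1, waveforms.shape[1])
        lda = LinearDiscriminantAnalysis(n_components=n_comp)
        features = lda.fit_transform(waveforms, labels)
        # Re-cluster with a Gaussian mixture in the discriminative subspace.
        new_labels = GaussianMixture(n_clusters, random_state=seed).fit_predict(features)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```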

  7. Noise-robust unsupervised spike sorting based on discriminative subspace learning with outlier handling

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2017-06-01

Objective. Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. Most of the feature extraction and dimensionality reduction techniques that have been used for spike sorting give a projection subspace which is not necessarily the most discriminative one. Therefore, the clusters which appear inherently separable in some discriminative subspace may overlap if projected using conventional feature extraction approaches, leading to a poor sorting accuracy, especially when the noise level is high. In this paper, we propose a noise-robust and unsupervised spike sorting algorithm based on learning discriminative spike features for clustering. Approach. The proposed algorithm uses discriminative subspace learning to extract low-dimensional, maximally discriminative features from the spike waveforms and performs clustering with automatic detection of the number of clusters. The core part of the algorithm involves iterative subspace selection using linear discriminant analysis and clustering using a Gaussian mixture model with outlier detection. A statistical test in the discriminative subspace is proposed to automatically detect the number of clusters. Main results. Comparative results on publicly available simulated and real in vivo datasets demonstrate that our algorithm achieves substantially improved cluster distinction, leading to higher sorting accuracy and more reliable detection of clusters which are highly overlapping and not detectable using conventional feature extraction techniques such as principal component analysis or wavelets. Significance. By providing more accurate information about the activity of a larger number of individual neurons with high robustness to neural noise and outliers, the proposed unsupervised spike sorting algorithm facilitates more detailed and accurate analysis of single- and multi-unit activities in neuroscience and brain machine interface studies.

  8. A blood pressure monitor with robust noise reduction system under linear cuff inflation and deflation.

    Science.gov (United States)

    Usuda, Takashi; Kobayashi, Naoki; Takeda, Sunao; Kotake, Yoshifumi

    2010-01-01

We have developed a non-invasive blood pressure monitor which can measure blood pressure quickly and robustly. This monitor combines two measurement modes: linear inflation and linear deflation. In the inflation mode, we realized a faster measurement with a rapid inflation rate. In the deflation mode, we realized robust noise reduction. When there is neither noise nor arrhythmia, the inflation mode incorporated in this monitor provides precise, quick and comfortable measurement. Once the inflation mode fails to calculate an appropriate blood pressure due to body movement or arrhythmia, the monitor switches automatically to the deflation mode and measures blood pressure using digital signal processing such as wavelet analysis, filter banks, and filtering combined with FFT and inverse FFT. The inflation mode succeeded in 2440 of 3099 measurements (79%) in an operating room and a rehabilitation room. The newly designed blood pressure monitor provides the fastest measurement for patients with normal circulation and robust measurement for patients with body movement or severe arrhythmia. This fast measurement method also provides comfort for patients.

  9. Robust image authentication in the presence of noise

    CERN Document Server

    2015-01-01

This book addresses the problems that hinder image authentication in the presence of noise. It considers the advantages and disadvantages of existing algorithms for image authentication and shows new approaches and solutions for robust image authentication. The state-of-the-art algorithms are compared and, furthermore, innovative approaches and algorithms are introduced. The introduced algorithms are applied to improve image authentication, watermarking and biometry. Aside from presenting new directions and algorithms for robust image authentication in the presence of noise, as well as image correction, this book also: provides an overview of the state-of-the-art algorithms for image authentication in the presence of noise and modifications, as well as a comparison of these algorithms; presents novel algorithms for robust image authentication, in which correction and authentication of the image are attempted; examines different views for the solution of problems connected to image authentication in the pre...

  10. Robustness against parametric noise of nonideal holonomic gates

    International Nuclear Information System (INIS)

    Lupo, Cosmo; Aniello, Paolo; Napolitano, Mario; Florio, Giuseppe

    2007-01-01

Holonomic gates for quantum computation are commonly considered to be robust against certain kinds of parametric noise, the cause of this robustness being the geometric character of the transformation achieved in the adiabatic limit. On the other hand, the effects of decoherence are expected to become more and more relevant when the adiabatic limit is approached. Starting from the system described by Florio et al. [Phys. Rev. A 73, 022327 (2006)], here we discuss the behavior of nonideal holonomic gates at finite operational time, i.e., long before the adiabatic limit is reached. We have considered several models of parametric noise and studied the robustness of finite-time gates. The results obtained suggest that the finite-time gates present some effects of cancellation of the perturbations introduced by the noise which mimic the geometrical cancellation effect of standard holonomic gates. Nevertheless, a careful analysis of the results leads to the conclusion that these effects are related to a dynamical instead of a geometrical feature.

  11. Robustness against parametric noise of nonideal holonomic gates

    Science.gov (United States)

    Lupo, Cosmo; Aniello, Paolo; Napolitano, Mario; Florio, Giuseppe

    2007-07-01

    Holonomic gates for quantum computation are commonly considered to be robust against certain kinds of parametric noise, the cause of this robustness being the geometric character of the transformation achieved in the adiabatic limit. On the other hand, the effects of decoherence are expected to become more and more relevant when the adiabatic limit is approached. Starting from the system described by Florio [Phys. Rev. A 73, 022327 (2006)], here we discuss the behavior of nonideal holonomic gates at finite operational time, i.e., long before the adiabatic limit is reached. We have considered several models of parametric noise and studied the robustness of finite-time gates. The results obtained suggest that the finite-time gates present some effects of cancellation of the perturbations introduced by the noise which mimic the geometrical cancellation effect of standard holonomic gates. Nevertheless, a careful analysis of the results leads to the conclusion that these effects are related to a dynamical instead of a geometrical feature.

  12. THE NOISE IMMUNITY OF THE DIGITAL DEMODULATOR MFM-AM SIGNAL USED IN DATA COMMUNICATIONS SYSTEMS OF AIR TRAFFIC CONTROL WITH AUTOMATIC DEPENDENT SURVEILLANCE AGAINST A NON-GAUSSIAN NOISE

    Directory of Open Access Journals (Sweden)

    A. L. Senyavskiy

    2015-01-01

Full Text Available The article analyzes the robustness of the digital demodulator of a minimum frequency shift keying signal at a subcarrier frequency with respect to non-Gaussian interference such as atmospheric noise, industrial noise, and interfering frequency- and phase-shift keyed signals. This type of demodulator is used for the transmission of navigation data in air traffic control systems with automatic dependent surveillance.

  13. ADSL Transceivers Applying DSM and Their Nonstationary Noise Robustness

    Directory of Open Access Journals (Sweden)

    Bostoen Tom

    2006-01-01

Full Text Available Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It is shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
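Iterative water-filling can be sketched as follows: each user in turn allocates power across tones by water-filling against the noise plus the interference produced by the other users' current allocations, and the process repeats until the allocations stop changing. The channel and crosstalk model below is an illustrative assumption, not the test-bed configuration from the paper.

```python
import numpy as np

def waterfill(inv_gain, budget, iters=50):
    """Single-user water-filling: p_k = max(0, mu - inv_gain_k), sum(p) = budget."""
    lo, hi = 0.0, inv_gain.min() + budget
    for _ in range(iters):                      # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gain, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv_gain, 0.0)

def iterative_waterfilling(direct, cross, noise, budgets, n_rounds=20):
    """direct[u,k]: direct channel gain; cross[u,v,k]: crosstalk gain from user v into u."""
    n_users, n_tones = direct.shape
    power = np.zeros((n_users, n_tones))
    for _ in range(n_rounds):
        for u in range(n_users):
            interference = noise[u] + sum(cross[u, v] * power[v]
                                          for v in range(n_users) if v != u)
            power[u] = waterfill(interference / direct[u], budgets[u])
    return power
```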

  14. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded with ease by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was developed through physiological and psychological investigations of ASA. The CASA front end comprises the Zilany-Bruce auditory model, followed by fundamental frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulus was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from the acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include the introduction of a higher Q factor and a middle ear filter more analogous to the human auditory system

  15. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    Science.gov (United States)

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing.

  16. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs a Bayesian estimation method in estimating the test cumulative distribution function. A previous study conducted on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
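For context, the basic (non-Bayesian, non-class-based) form of histogram equalization for feature normalization maps each test feature value through the empirical test CDF and then through the inverse of a reference CDF estimated from training data. A minimal sketch of that mapping, applied to one feature dimension at a time, is given below.

```python
import numpy as np

def histogram_equalize(test_feat, ref_feat):
    """Map test feature values so their distribution matches the reference one.

    test_feat, ref_feat: 1-D arrays of one feature dimension (e.g. one cepstral
    coefficient over time). Returns the normalized test features.
    """
    ref_sorted = np.sort(ref_feat)
    # Empirical CDF value of each test sample within the test data itself.
    ranks = np.argsort(np.argsort(test_feat))
    test_cdf = (ranks + 0.5) / len(test_feat)
    # Inverse reference CDF evaluated at those probabilities.
    quantiles = (np.arange(len(ref_sorted)) + 0.5) / len(ref_sorted)
    return np.interp(test_cdf, quantiles, ref_sorted)
```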

  17. Light field reconstruction robust to signal dependent noise

    Science.gov (United States)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capturing and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  18. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

    Directory of Open Access Journals (Sweden)

    Longge Zhang

    2013-01-01

Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. A sequence of nested, geometrically proportioned, asymptotically stable ellipsoids and corresponding controllers is first constructed offline. Then, in the first strategy, the feedback controllers are automatically selected online with the receding horizon. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The newly proposed strategies not only reduce conservatism but also decrease the online computation. Numerical examples are given to illustrate their effectiveness.

  19. A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.

    Science.gov (United States)

    Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing

    2018-03-07

The unscented Kalman filter (UKF) may suffer from performance degradation and even divergence when there is a mismatch between the noise distribution assumed a priori by the user and the actual one in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each time step, a standard UKF is implemented first to obtain the state estimates using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to estimate the current noise covariances of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the previous noise covariance matrices with the estimates to form the new noise covariance matrices. Finally, the state estimates are corrected according to the new noise covariance matrices and previous state estimates. Compared with the standard UKF and other adaptive UKF algorithms, the RAUKF converges faster to the actual noise covariance and thus achieves better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
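The covariance adaptation step can be sketched in a linear Kalman setting; the paper works with a UKF, and the window length, weighting factor, and the particular innovation/residual estimators below are common choices that should be read as assumptions rather than the paper's exact formulas.

```python
import numpy as np

def adapt_noise_covariances(innovations, residuals, H, P_post, K,
                            Q_old, R_old, weight=0.3):
    """Blend previous Q, R with windowed innovation/residual based estimates.

    innovations: recent pre-update innovations  v_k = z_k - H @ x_prior
    residuals:   recent post-update residuals   r_k = z_k - H @ x_post
    """
    C_v = np.mean([np.outer(v, v) for v in innovations], axis=0)
    C_r = np.mean([np.outer(r, r) for r in residuals], axis=0)
    Q_est = K @ C_v @ K.T             # innovation-based estimate of the process noise
    R_est = C_r + H @ P_post @ H.T    # residual-based estimate of the measurement noise
    Q_new = (1 - weight) * Q_old + weight * Q_est
    R_new = (1 - weight) * R_old + weight * R_est
    return Q_new, R_new
```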

  20. Robust Cyclic MUSIC Algorithm for Finding Directions in Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Sen Li

    2017-01-01

Full Text Available This paper addresses the issue of direction finding of a cyclostationary signal under impulsive noise environments modeled by the α-stable distribution. Since the α-stable distribution does not have finite second-order statistics, the conventional cyclic correlation-based signal-selective direction finding algorithms do not work effectively. To resolve this problem, we define two robust cyclic correlation functions which are derived from the robust statistics property of the correntropy and a nonlinear transformation, respectively. The MUSIC algorithm with the robust cyclic correlation matrix of the signals received by the array is then used to estimate the direction of the cyclostationary signal in the presence of impulsive noise. The computer simulation results demonstrate that the two proposed robust cyclic correlation-based algorithms outperform the conventional cyclic correlation and fractional lower order cyclic correlation based methods.

  1. Robustness of quantum correlations against linear noise

    International Nuclear Information System (INIS)

    Guo, Zhihua; Cao, Huaixin; Qu, Shixian

    2016-01-01

    Relative robustness of quantum correlations (RRoQC) of a bipartite state is firstly introduced relative to a classically correlated state. Robustness of quantum correlations (RoQC) of a bipartite state is then defined as the minimum of RRoQC of the state relative to all classically correlated ones. It is proved that as a function on quantum states, RoQC is nonnegative, lower semi-continuous and neither convex nor concave; especially, it is zero if and only if the state is classically correlated. Thus, RoQC not only quantifies the endurance of quantum correlations of a state against linear noise, but also can be used to distinguish between quantum and classically correlated states. Furthermore, the effects of local quantum channels on the robustness are explored and characterized. (paper)

  2. Mars - robust automatic backbone assignment of proteins

    International Nuclear Information System (INIS)

    Jung, Young-Sang; Zweckstetter, Markus

    2004-01-01

MARS, a program for robust automatic backbone assignment of 13C/15N labeled proteins, is presented. MARS does not require tight thresholds for establishing sequential connectivity or detailed adjustment of these thresholds, and it can work with a wide variety of NMR experiments. Using only 13Cα/13Cβ connectivity information, MARS allows automatic, error-free assignment of 96% of the 370-residue maltose-binding protein. MARS can successfully be used when data are missing for a substantial portion of residues or for proteins with very high chemical shift degeneracy, such as partially or fully unfolded proteins. Other sources of information, such as residue-specific information or known assignments from a homologous protein, can be included in the assignment process. MARS exports its results in SPARKY format. This allows visual validation and integration of automated and manual assignment

  3. Arduino-based noise robust online heart-rate detection.

    Science.gov (United States)

    Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda

    2017-04-01

This paper introduces a noise-robust real-time heart rate detection system based on electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart rate detection technique. Accuracy of the system is validated through simulated noisy ECG data with various levels of signal-to-noise ratio (SNR). The mean percentage error of the detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
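A window-based autocorrelation heart-rate estimate can be sketched as below: the dominant autocorrelation peak inside a physiologically plausible lag range gives the beat period. The sampling rate and beat-rate limits are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def heart_rate_bpm(ecg_window, fs=250, min_bpm=40, max_bpm=200):
    """Estimate heart rate from one ECG window via the autocorrelation peak lag."""
    x = np.asarray(ecg_window, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. len(x)-1
    lo = int(fs * 60 / max_bpm)                          # shortest plausible beat period
    hi = int(fs * 60 / min_bpm)                          # longest plausible beat period
    peak_lag = lo + int(np.argmax(acf[lo:hi]))
    return 60.0 * fs / peak_lag
```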

  4. Automatic Synthesis of Robust and Optimal Controllers

    DEFF Research Database (Denmark)

    Cassez, Franck; Jessen, Jan Jacob; Larsen, Kim Guldstrand

    2009-01-01

In this paper, we show how to apply recent tools for the automatic synthesis of robust and near-optimal controllers to a real industrial case study. We show how to use three different classes of models and their supporting existing tools, Uppaal-TiGA for synthesis, PHAVer for verification, and Simulink for simulation, in a complementary way. We believe that this case study shows that our tools have reached a level of maturity that allows us to tackle interesting and relevant industrial control problems.

  5. Robust extended Kalman filter of discrete-time Markovian jump nonlinear system under uncertain noise

    International Nuclear Information System (INIS)

    Zhu, Jin; Park, Jun Hong; Lee, Kwan Soo; Spiryagin, Maksym

    2008-01-01

This paper examines the problem of robust extended Kalman filter design for discrete-time Markovian jump nonlinear systems with noise uncertainty. Because of the existence of stochastic Markovian switching, the state and measurement equations of the underlying system are subject to uncertain noise whose covariance matrices are time-varying or unmeasurable instead of stationary. First, based on the expression of the filtering performance deviation, the admissible uncertainty of the noise covariance matrix is given. Secondly, two forms of noise uncertainty are taken into account: non-structural and structural. It is proved by applying game theory that this filter design is a robust minimax filter. A numerical example shows the validity of the method

  6. Markov random field based automatic image alignment for electron tomography.

    Science.gov (United States)

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  7. Sparse coding of the modulation spectrum for noise-robust automatic speech recognition

    NARCIS (Netherlands)

    Ahmadi, S.; Ahadi, S.M.; Cranen, B.; Boves, L.W.J.

    2014-01-01

    The full modulation spectrum is a high-dimensional representation of one-dimensional audio signals. Most previous research in automatic speech recognition converted this very rich representation into the equivalent of a sequence of short-time power spectra, mainly to simplify the computation of the

  8. Environmental Noise, Genetic Diversity and the Evolution of Evolvability and Robustness in Model Gene Networks

    Science.gov (United States)

    Steiner, Christopher F.

    2012-01-01

    The ability of organisms to adapt and persist in the face of environmental change is accepted as a fundamental feature of natural systems. More contentious is whether the capacity of organisms to adapt (or “evolvability”) can itself evolve and the mechanisms underlying such responses. Using model gene networks, I provide evidence that evolvability emerges more readily when populations experience positively autocorrelated environmental noise (red noise) compared to populations in stable or randomly varying (white noise) environments. Evolvability was correlated with increasing genetic robustness to effects on network viability and decreasing robustness to effects on phenotypic expression; populations whose networks displayed greater viability robustness and lower phenotypic robustness produced more additive genetic variation and adapted more rapidly in novel environments. Patterns of selection for robustness varied antagonistically with epistatic effects of mutations on viability and phenotypic expression, suggesting that trade-offs between these properties may constrain their evolutionary responses. Evolution of evolvability and robustness was stronger in sexual populations compared to asexual populations indicating that enhanced genetic variation under fluctuating selection combined with recombination load is a primary driver of the emergence of evolvability. These results provide insight into the mechanisms potentially underlying rapid adaptation as well as the environmental conditions that drive the evolution of genetic interactions. PMID:23284934

  9. Automatic exposure control systems designed to maintain constant image noise: effects on computed tomography dose and noise relative to clinically accepted technique charts.

    Science.gov (United States)

    Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; Kofler, James M; McCollough, Cynthia H

    2015-01-01

To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies against those arising from clinically accepted technique charts and AEC systems designed to vary image noise. A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system, and the results were compared with clinical technique charts. For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant-noise-based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects.

  10. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum
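For context, the commonly used background-mode method exploits the fact that background data in a magnitude MR image follow a Rayleigh distribution whose mode equals the noise standard deviation, so the location of the histogram maximum of a background region is itself an estimate of sigma. A minimal sketch, assuming a manually selected background region, is given below.

```python
import numpy as np

def noise_sigma_from_background(background_pixels, n_bins=200):
    """Estimate the noise std from background magnitude data (Rayleigh mode)."""
    counts, edges = np.histogram(background_pixels, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)]          # Rayleigh pdf peaks at x = sigma

# Synthetic check: Rayleigh-distributed background with sigma = 5.
rng = np.random.default_rng(0)
print(noise_sigma_from_background(rng.rayleigh(scale=5.0, size=100_000)))
```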

  11. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate and quantitatively characterize the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.

  12. Robust automatic high resolution segmentation of SOFC anode porosity in 3D

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Bowen, Jacob R.

    2008-01-01

Routine use of 3D characterization of SOFCs by focused ion beam (FIB) serial sectioning is generally restricted by the time-consuming task of manually delineating structures within each image slice. We apply advanced image analysis algorithms to automatically segment the porosity phase of an SOFC anode in 3D. The technique is based on numerical approximations to partial differential equations to evolve a 3D surface to the desired phase boundary. Vector fields derived from the experimentally acquired data are used as the driving force. The automatic segmentation is compared to manual delineation, revealing a good correspondence, and the two approaches are quantitatively compared. It is concluded that the automatic approach is more robust, more reproducible and orders of magnitude quicker than manual segmentation of SOFC anode porosity for subsequent quantitative 3D analysis. Lastly...

  13. Aircraft noise effects on sleep: a systematic comparison of EEG awakenings and automatically detected cardiac activations

    International Nuclear Information System (INIS)

    Basner, Mathias; Müller, Uwe; Elmenhorst, Eva-Maria; Kluge, Götz; Griefahn, Barbara

    2008-01-01

Polysomnography is the gold standard for investigating noise effects on sleep, but data collection and analysis are laborious and expensive. We recently developed an algorithm for the automatic identification of cardiac activations associated with cortical arousals, which uses heart rate information derived from a single electrocardiogram (ECG) channel. We hypothesized that cardiac activations can be used as estimates for EEG awakenings. Polysomnographic EEG awakenings and automatically detected cardiac activations were systematically compared using laboratory data of 112 subjects (47 male, mean ± SD age 37.9 ± 13 years), 985 nights and 23 855 aircraft noise events (ANEs). The probability of automatically detected cardiac activations increased monotonically with increasing maximum sound pressure levels of ANEs, exceeding the probability of EEG awakenings by up to 18.1%. If spontaneous reactions were taken into account, exposure–response curves were practically identical for EEG awakenings and cardiac activations. Automatically detected cardiac activations may be used as estimates for EEG awakenings. More investigations are needed to further validate the ECG algorithm in the field and to investigate inter-individual differences in its ability to predict EEG awakenings. This inexpensive, objective and non-invasive method facilitates large-scale field studies on the effects of traffic noise on sleep

  14. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  15. Experimental investigation of the robustness against noise for different Bell-type inequalities in three-qubit Greenberger-Horne-Zeilinger states

    International Nuclear Information System (INIS)

    Lu Huaixin; Zhao Jiaqiang; Cao Lianzhen; Wang Xiaoqin

    2011-01-01

There are different families of inequalities that can be used to characterize the entanglement of multiqubit entangled states by the violation of the quantum mechanics prediction versus the local realism prediction. In a noisy environment, the violation of the different inequalities differs from that in a noise-free environment; that is, each inequality has a different robustness against noise. We investigate this proposition theoretically and experimentally with the Mermin inequality, Bell inequality, and Svetlichny inequality using three-qubit GHZ states for different levels of noise. Our purpose is to determine which one of the inequalities is more robust against noise and thus more suitable for characterizing the entanglement of states. Our results show that the Mermin inequality is the most robust against stronger noise and is, thus, more suitable for characterizing the entanglement of three-qubit GHZ states in a noisy environment.

  16. Experimental investigation of the robustness against noise for different Bell-type inequalities in three-qubit Greenberger-Horne-Zeilinger states

    Energy Technology Data Exchange (ETDEWEB)

    Lu Huaixin; Zhao Jiaqiang; Cao Lianzhen; Wang Xiaoqin [Department of Physics and Electronic Science, Weifang University, Weifang, Shandong 261061 (China)

    2011-10-15

There are different families of inequalities that can be used to characterize the entanglement of multiqubit entangled states by the violation of the quantum mechanics prediction versus the local realism prediction. In a noisy environment, the violation of the different inequalities differs from that in a noise-free environment; that is, each inequality has a different robustness against noise. We investigate this proposition theoretically and experimentally with the Mermin inequality, Bell inequality, and Svetlichny inequality using three-qubit GHZ states for different levels of noise. Our purpose is to determine which one of the inequalities is more robust against noise and thus more suitable for characterizing the entanglement of states. Our results show that the Mermin inequality is the most robust against stronger noise and is, thus, more suitable for characterizing the entanglement of three-qubit GHZ states in a noisy environment.

  17. Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns.

    Directory of Open Access Journals (Sweden)

    Jayalakshmi Viswanathan

    2016-11-01

Full Text Available Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among the CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, the robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping the sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.

  18. Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise.

    Science.gov (United States)

    Cui, Bingbo; Chen, Xiyuan; Tang, Xihua; Huang, Haoqian; Liu, Xiao

    2018-01-01

In order to improve the accuracy of GNSS/INS working in a GNSS-denied environment, a robust cubature Kalman filter (RCKF) is developed by considering colored measurement noise and missing observations. First, an improved cubature Kalman filter (CKF) is derived by considering colored measurement noise, where the time-differencing approach is applied to yield new observations. Then, after analyzing the disadvantages of existing methods, the measurement augmentation used in processing colored noise is translated into processing the uncertainties of the CKF, and a new sigma point update framework is utilized to account for the bounded model uncertainties. By reusing the diffused sigma points and the approximation residual in the prediction stage of the CKF, the RCKF is developed and its error performance is analyzed theoretically. Results of a numerical experiment and a field test reveal that the RCKF is more robust than the CKF and the extended Kalman filter (EKF); compared with the EKF, the heading error of the land vehicle is reduced by about 72.4%. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

The performance of a particle tracking algorithm, which detects particle trajectories from discretely recorded particle positions, can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs a regression method, instead of the extrapolation method usually employed by existing algorithms, to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of tracked trajectories is inspected to remove false ones. The algorithm is validated with synthetic data. The results show that the algorithm is superior to traditional algorithms in the aspect of tracking long trajectories
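The regression-based prediction can be sketched as fitting a low-order polynomial to the last few observed positions of a trajectory and evaluating it at the next frame, which smooths out position noise better than extrapolating from the last one or two points alone; the window length and polynomial order below are assumptions, not the paper's settings.

```python
import numpy as np

def predict_next_position(track, window=5, order=2):
    """Predict the next (x, y) position of a trajectory by polynomial regression.

    track: array of shape (n_frames, 2) with the most recent positions last.
    """
    recent = np.asarray(track[-window:], dtype=float)
    frames = np.arange(len(recent))
    next_frame = len(recent)
    # Fit each coordinate against frame index; regression averages out noise.
    return np.array([np.polyval(np.polyfit(frames, recent[:, d], order), next_frame)
                     for d in range(recent.shape[1])])

# Toy usage: noisy observations of a particle moving along a parabola.
rng = np.random.default_rng(0)
t = np.arange(8)
true_xy = np.stack([t, 0.1 * t**2], axis=1)
observed = true_xy + 0.05 * rng.normal(size=true_xy.shape)
print(predict_next_position(observed))
```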

  20. The Effects of Background Noise on the Performance of an Automatic Speech Recogniser

    Science.gov (United States)

    Littlefield, Jason; HashemiSakhtsari, Ahmad

    2002-11-01

Ambient or environmental noise is a major factor that affects the performance of an automatic speech recognizer. Large-vocabulary, speaker-dependent, continuous speech recognizers are commercially available. Speech recognizers perform well in a quiet environment but poorly in a noisy environment. Speaker-dependent speech recognizers require training prior to being tested, and the level of background noise in both phases affects the performance of the recognizer. This study aims to determine whether the best performance of a speech recognizer occurs when the levels of background noise during the training and test phases are the same, and how the performance is affected when the levels differ. The relationship between the performance of the speech recognizer and upgrades to computer speed, memory and software version was also investigated.

  1. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

Full Text Available Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  2. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  3. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper, a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetric property of the signal to represent the signal in the frequency domain with fewer frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm in the aspect of retaining the geometrical characteristics of the ECG signal.
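For orientation, a plain time-domain LMS adaptive line enhancer is sketched below: the filter predicts the current sample from delayed past samples, so the correlated (quasi-periodic) ECG component is enhanced while uncorrelated random noise falls into the prediction error. This baseline deliberately omits the DCT transform, the variable step size, and the Griffiths' gradient that distinguish the proposed TVGLMS; the delay, filter length, and step size are assumptions.

```python
import numpy as np

def adaptive_line_enhancer(noisy, n_taps=32, delay=1, mu=0.01):
    """Plain time-domain LMS adaptive line enhancer (ALE) baseline."""
    noisy = np.asarray(noisy, dtype=float)
    w = np.zeros(n_taps)
    enhanced = np.zeros_like(noisy)
    for n in range(n_taps + delay, len(noisy)):
        x = noisy[n - delay - n_taps:n - delay][::-1]   # delayed reference vector
        y = w @ x                                       # predicted (correlated) component
        e = noisy[n] - y                                # prediction error (mostly noise)
        w += mu * e * x                                 # LMS weight update
        enhanced[n] = y
    return enhanced
```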

  4. Evidence of "hidden hearing loss" following noise exposures that produce robust TTS and ABR wave-I amplitude reductions.

    Science.gov (United States)

    Lobarinas, Edward; Spankovich, Christopher; Le Prell, Colleen G

    2017-06-01

    In animals, noise exposures that produce robust temporary threshold shifts (TTS) can produce immediate damage to afferent synapses and long-term degeneration of low spontaneous rate auditory nerve fibers. This synaptopathic damage has been shown to correlate with reduced auditory brainstem response (ABR) wave-I amplitudes at suprathreshold levels. The perceptual consequences of this "synaptopathy" remain unknown but have been suggested to include compromised hearing performance in competing background noise. Here, we used a modified startle inhibition paradigm to evaluate whether noise exposures that produce robust TTS and ABR wave-I reduction but not permanent threshold shift (PTS) reduced hearing-in-noise performance. Animals exposed to 109 dB SPL octave band noise showed TTS >30 dB 24-h post noise and modest but persistent ABR wave-I reduction 2 weeks post noise despite full recovery of ABR thresholds. Hearing-in-noise performance was negatively affected by the noise exposure. However, the effect was observed only at the poorest signal to noise ratio and was frequency specific. Although TTS >30 dB 24-h post noise was a predictor of functional deficits, there was no relationship between the degree of ABR wave-I reduction and degree of functional impairment. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Robust synchronization analysis in nonlinear stochastic cellular networks with time-varying delays, intracellular perturbations and intercellular noise.

    Science.gov (United States)

    Chen, Po-Wei; Chen, Bor-Sen

    2011-08-01

    A cellular network consisting of a large number of interacting cells is naturally complex. These cells have to be synchronized in order for their collective phenomena to emerge and serve biological purposes. However, the inherently stochastic intra- and intercellular interactions are noisy and delayed by biochemical processes. In this study, a robust synchronization scheme is proposed for a nonlinear stochastic time-delay coupled cellular network (TdCCN) in spite of time-varying process delays and intracellular parameter perturbations. Furthermore, the nonlinear stochastic noise filtering ability is also investigated for this synchronized TdCCN against stochastic intercellular and environmental disturbances. Since it is very difficult to solve a robust synchronization problem with the Hamilton-Jacobi inequality (HJI) matrix, a linear matrix inequality (LMI) is employed to solve this problem with the help of a global linearization method. Through this robust synchronization analysis, we can gain a more systemic insight into not only the robust synchronizability but also the noise filtering ability of TdCCN under time-varying process delays, intracellular perturbations and intercellular disturbances. The measures of robustness and noise filtering ability of a synchronized TdCCN have potential application in the design of neuron transmitters, on-time mass production of biochemical molecules, and synthetic biology. Finally, a benchmark of robust synchronization design in Escherichia coli repressilators is given to confirm the effectiveness of the proposed methods. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Multi-Phase Sub-Sampling Fractional-N PLL with soft loop switching for fast robust locking

    NARCIS (Netherlands)

    Liao, Dongyi; Dai, FA Foster; Nauta, Bram; Klumperink, Eric A.M.

    2017-01-01

    This paper presents a low phase noise sub-sampling PLL (SSPLL) with multi-phase outputs. Automatic soft switching between the sub-sampling phase loop and frequency loop is proposed to improve robustness against perturbations and interferences that may cause a traditional SSPLL to lose lock. A

  7. Concurrent Codes: A Holographic-Type Encoding Robust against Noise and Loss.

    Directory of Open Access Journals (Sweden)

    David M Benton

    Full Text Available Concurrent coding is an encoding scheme with 'holographic' type properties that are shown here to be robust against a significant amount of noise and signal loss. This single encoding scheme is able to correct for random errors and burst errors simultaneously, but does not rely on cyclic codes. A simple and practical scheme has been tested that displays perfect decoding when the signal-to-noise ratio is of the order of -18 dB. The same scheme also displays perfect reconstruction when a contiguous block of 40% of the transmission is missing. In addition, this scheme is 50% more efficient in terms of transmitted power requirements than equivalent cyclic codes. A simple model is presented that describes the process of decoding and can determine the computational load that would be expected, as well as describing the critical levels of noise and missing data at which false messages begin to be generated.

  8. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    Science.gov (United States)

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may

  9. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We

  10. A robust and coherent network statistic for detecting gravitational waves from inspiralling compact binaries in non-Gaussian noise

    CERN Document Server

    Bose, S

    2002-01-01

    The robust statistic proposed by Creighton (Creighton J D E 1999 Phys. Rev. D 60 021101) and Allen et al (Allen et al 2001 Preprint gr-qc/010500) for the detection of stationary non-Gaussian noise is briefly reviewed. We compute the robust statistic for generic weak gravitational-wave signals in the mixture-Gaussian noise model to an accuracy higher than in those analyses, and reinterpret its role. Specifically, we obtain the coherent statistic for detecting gravitational-wave signals from inspiralling compact binaries with an arbitrary network of earth-based interferometers. Finally, we show that the excess computational costs incurred owing to non-Gaussianity are negligible compared to the cost of detection in Gaussian noise.

  11. Robust Sequential Covariance Intersection Fusion Kalman Filtering over Multi-agent Sensor Networks with Measurement Delays and Uncertain Noise Variances

    Institute of Scientific and Technical Information of China (English)

    QI Wen-Juan; ZHANG Peng; DENG Zi-Li

    2014-01-01

    This paper deals with the problem of designing robust sequential covariance intersection (SCI) fusion Kalman filter for the clustering multi-agent sensor network system with measurement delays and uncertain noise variances. The sensor network is partitioned into clusters by the nearest neighbor rule. Using the minimax robust estimation principle, based on the worst-case conservative sensor network system with conservative upper bounds of noise variances, and applying the unbiased linear minimum variance (ULMV) optimal estimation rule, we present the two-layer SCI fusion robust steady-state Kalman filter which can reduce communication and computation burdens and save energy sources, and guarantee that the actual filtering error variances have a less-conservative upper-bound. A Lyapunov equation method for robustness analysis is proposed, by which the robustness of the local and fused Kalman filters is proved. The concept of the robust accuracy is presented and the robust accuracy relations of the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the global SCI fuser is higher than those of the local SCI fusers and the robust accuracies of all SCI fusers are higher than that of each local robust Kalman filter. A simulation example for a tracking system verifies the robustness and robust accuracy relations.
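    For reference, the two-estimate covariance intersection rule, which a sequential fuser of this kind applies repeatedly over the sensors of a cluster, can be written as below. This is the standard textbook formula, not a result specific to the paper.

    ```latex
    % Covariance intersection of two estimates (\hat{x}_1, P_1) and (\hat{x}_2, P_2):
    P_{CI}^{-1} = \omega P_1^{-1} + (1-\omega) P_2^{-1}, \qquad
    \hat{x}_{CI} = P_{CI}\left(\omega P_1^{-1}\hat{x}_1 + (1-\omega) P_2^{-1}\hat{x}_2\right),
    \qquad \omega \in [0,1].
    ```

    The weight ω is typically chosen to minimize the trace (or determinant) of P_CI, and the fused covariance remains a consistent, conservative bound on the true error covariance even though the cross-covariance between the two estimates is unknown.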

  12. Comparison of PAM and CAP modulations robustness against mode partition noise in optical links

    Science.gov (United States)

    Stepniak, Grzegorz

    2017-08-01

    Mode partition noise (MPN) of the laser employed at the transmitter can significantly degrade the transmission performance. In the paper, we introduce a simulation model of MPN in vertical cavity surface emitting laser (VCSEL) and simulate transmission of pulse amplitude modulation (PAM) and carrierless amplitude phase (CAP) signals in multimode fiber (MMF) link. By turning off other effects, like relative intensity noise (RIN), we focus solely on the influence of MPN on transmission performance degradation. Robustness of modulation and equalization type against MPN is studied.

  13. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    Science.gov (United States)

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
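    To make the low-rank plus sparse decomposition behind RPCA concrete, here is a minimal NumPy sketch of principal component pursuit solved with an inexact augmented Lagrangian scheme. It operates on a generic real-valued data matrix; the cross-spectral matrix construction, the prewhitening step, and all hydroacoustic specifics of the paper are omitted, and the parameter defaults are common heuristics rather than the paper's choices.

    ```python
    # Minimal sketch (illustrative): principal component pursuit, M ~ L (low rank) + S (sparse).
    import numpy as np

    def rpca_pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
        """Decompose a real matrix M into a low-rank part L and a sparse part S."""
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
        norm_M = np.linalg.norm(M, 'fro')
        S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(max_iter):
            # low-rank update: singular value thresholding of (M - S + Y/mu)
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # sparse update: entry-wise soft thresholding
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            # dual update and convergence check
            Y = Y + mu * (M - L - S)
            if np.linalg.norm(M - L - S, 'fro') <= tol * norm_M:
                break
        return L, S
    ```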

  14. Resolution and robustness to noise of the sensitivity-based method for microwave imaging with data acquired on cylindrical surfaces

    International Nuclear Information System (INIS)

    Zhang, Yifan; Tu, Sheng; Amineh, Reza K; Nikolova, Natalia K

    2012-01-01

    The spatial resolution limit of a Jacobian-based microwave imaging algorithm and its robustness to noise are evaluated. The focus here is on tomographic systems where the wideband data are acquired with a vertically scanned circular sensor array and at each scanning step a 2D image is reconstructed in the plane of the sensor array. The theoretical resolution is obtained as one-half of the maximum-frequency wavelength with far-zone data and about two-thirds of the array radius with near-zone data. Validation examples are given using analytical electromagnetic models. The algorithm is shown to be robust to noise when the response data are corrupted by Gaussian white noise. (paper)

  15. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type by minimizing the Kullback-Leibler divergence between the true posteriori distribution and a separable approximation is developed. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  16. Robust synchronization control scheme of a population of nonlinear stochastic synthetic genetic oscillators under intrinsic and extrinsic molecular noise via quorum sensing.

    Science.gov (United States)

    Chen, Bor-Sen; Hsu, Chih-Yuan

    2012-10-26

    Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order to emerge a biological phenomenon. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic in nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on Hamilton Jacobi inequality (HJI). We found that if the synchronization robustness can confer enough intrinsic robustness to tolerate intrinsic parameter fluctuation and extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells could not be maintained, then robust synchronization could be enhanced by external control input through quorum sensing molecules. In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, the fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which could be solved with the help of LMI

  17. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Science.gov (United States)

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by only ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as

  18. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.
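    A minimal sketch of this kind of pipeline is given below; it is not the authors' implementation. It assumes the third-party PyEMD package for the CEEMDAN decomposition and scikit-learn for the random forest, and the features pooled from the intrinsic mode functions (IMFs) are simple illustrative statistics rather than those used in the paper.

    ```python
    # Minimal sketch (assumptions: PyEMD and scikit-learn available; features illustrative).
    import numpy as np
    from PyEMD import CEEMDAN
    from sklearn.ensemble import RandomForestClassifier

    def epoch_features(eog_epoch, max_imfs=6):
        imfs = CEEMDAN()(eog_epoch)[:max_imfs]               # noise-assisted decomposition
        feats = []
        for imf in imfs:
            feats += [imf.std(),                              # IMF energy proxy
                      np.abs(imf).mean(),                     # mean absolute amplitude
                      ((imf[:-1] * imf[1:]) < 0).mean()]      # zero-crossing rate
        feats += [0.0] * (3 * max_imfs - len(feats))          # pad if fewer IMFs returned
        return np.array(feats)

    # X_epochs: list of 30-s EOG epochs, y: manually scored stages (hypothetical data)
    # X = np.array([epoch_features(e) for e in X_epochs])
    # clf = RandomForestClassifier(n_estimators=500).fit(X, y)
    ```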

  19. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    Science.gov (United States)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.

  20. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    Science.gov (United States)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied the AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  1. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces some inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration and uses the iterative closest point (ICP) algorithm in the last step to obtain the image-to-patient registration. Both phantom and clinical data were used to perform automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In the phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In the clinical experiments, the mean TREs of two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
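    A minimal sketch of the final refinement step is shown below, assuming Open3D point clouds of the image-derived head surface and the patient surface; the coarse transform from 3D feature matching is assumed to be given, and the distance thresholds are illustrative. This is not the paper's implementation.

    ```python
    # Minimal sketch (assumption: Open3D available; units in mm): ICP refinement of a
    # coarse image-to-patient transform obtained from 3D feature matching.
    import open3d as o3d

    def refine_registration(patient_pcd, image_pcd, T_coarse, max_dist=3.0):
        """Refine the coarse transform with point-to-plane ICP; returns (T, inlier RMSE)."""
        image_pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
        result = o3d.pipelines.registration.registration_icp(
            patient_pcd, image_pcd, max_dist, T_coarse,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return result.transformation, result.inlier_rmse

    # T, rmse = refine_registration(patient_surface, mri_head_surface, T_from_feature_matching)
    ```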

  2. Noise-robust cortical tracking of attended speech in real-world acoustic scenes

    DEFF Research Database (Denmark)

    Fuglsang, Søren; Dau, Torsten; Hjortkjær, Jens

    2017-01-01

    Selectively attending to one speaker in a multi-speaker scenario is thought to synchronize low-frequency cortical activity to the attended speech signal. In recent studies, reconstruction of speech from single-trial electroencephalogram (EEG) data has been used to decode which talker a listener is attending to in a two-talker situation. It is currently unclear how this generalizes to more complex sound environments. Behaviorally, speech perception is robust to the acoustic distortions that listeners typically encounter in everyday life, but it is unknown whether this is mirrored by a noise-robust neural tracking of attended speech. Here we used advanced acoustic simulations to recreate real-world acoustic scenes in the laboratory. In virtual acoustic realities with varying amounts of reverberation and number of interfering talkers, listeners selectively attended to the speech stream

  3. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Directory of Open Access Journals (Sweden)

    Jun Yi Wang

    Full Text Available Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large

  4. Fully integrated low-noise readout circuit with automatic offset cancellation loop for capacitive microsensors.

    Science.gov (United States)

    Song, Haryong; Park, Yunjong; Kim, Hyungseup; Cho, Dong-Il Dan; Ko, Hyoungho

    2015-10-14

    Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with automatic offset cancellation loop (AOCL) for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using AOCL. The AOCL generates electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by binary-search algorithm based on 10-bit successive approximation register (SAR) logic. The chip is implemented using 0.18 μm complementary metal-oxide-semiconductor (CMOS) process with an active area of 1.76 mm². The power consumption is 220 μW with 3.3 V supply. The input parasitic capacitances within the range of -250 fF to 250 fF can be cancelled out automatically, and the required calibration time is lower than 10 ms.
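    The binary-search calibration described above can be illustrated with a short, purely software sketch; it is not the chip's logic, and `measure_offset` stands in for the analog front-end readback, which is assumed to decrease monotonically as the compensation code increases.

    ```python
    # Minimal sketch (illustrative): 10-bit SAR-style binary search for the DAC code
    # whose injected compensation charge best nulls the readout offset.
    def sar_offset_calibration(measure_offset, bits=10):
        """measure_offset(code) -> residual offset after applying DAC 'code'."""
        code = 0
        for bit in reversed(range(bits)):      # MSB-first successive approximation
            trial = code | (1 << bit)          # tentatively set this bit
            if measure_offset(trial) >= 0:     # not yet over-compensated: keep the bit
                code = trial
        return code

    # usage: best_code = sar_offset_calibration(lambda c: adc_read(c))  # adc_read is hypothetical
    ```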

  5. Fully Integrated Low-Noise Readout Circuit with Automatic Offset Cancellation Loop for Capacitive Microsensors

    Directory of Open Access Journals (Sweden)

    Haryong Song

    2015-10-01

    Full Text Available Capacitive sensing schemes are widely used for various microsensors; however, such microsensors suffer from severe parasitic capacitance problems. This paper presents a fully integrated low-noise readout circuit with automatic offset cancellation loop (AOCL for capacitive microsensors. The output offsets of the capacitive sensing chain due to the parasitic capacitances and process variations are automatically removed using AOCL. The AOCL generates electrically equivalent offset capacitance and enables charge-domain fine calibration using a 10-bit R-2R digital-to-analog converter, charge-transfer switches, and a charge-storing capacitor. The AOCL cancels the unwanted offset by binary-search algorithm based on 10-bit successive approximation register (SAR logic. The chip is implemented using 0.18 μm complementary metal-oxide-semiconductor (CMOS process with an active area of 1.76 mm2. The power consumption is 220 μW with 3.3 V supply. The input parasitic capacitances within the range of −250 fF to 250 fF can be cancelled out automatically, and the required calibration time is lower than 10 ms.

  6. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing algorithms of banknote recognition are limited to work for restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To authenticate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves 100% recognition rate on our challenging dataset.

  7. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    International Nuclear Information System (INIS)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all

  8. Study of shift shock reduction of an automatic transmission using robust control; Robust seigyo wo mochiita ido hensokuki no hensoku shock teigen ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, K [JATCO Corp., Shizuoka (Japan); Totsuka, H; Sanada, K; Kitagawa, A [Tokyo Institute of Technology, Tokyo (Japan)

    1997-10-01

    To effectively reduce shift shock of an Automatic Transmission, we designed a feed-back controller that manipulates the hydraulic pressure of a clutch and input torque, and also controls the turbine revolution and output torque. We used robust control theory to consider the fluctuation of hydraulic characteristics and friction elements, and verified the effect of the controller by simulation and experiment. 1 ref., 11 figs.

  9. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    Science.gov (United States)

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to driver's body movements. In this paper, a system that estimates the heartbeat from the seat embedded piezoelectric sensors, and that is robust to strong body movements is presented. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on sensors. The performance of the proposed system was evaluated on real driving data up to 100 km/h and with slaloms at high speed. It is shown that this method improves by 36.7% the pulsation estimation under strong body movement compared to static sensor pulsation estimation and appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions.

  10. Robust spinal cord resting-state fMRI using independent component analysis-based nuisance regression noise reduction.

    Science.gov (United States)

    Hu, Yong; Jin, Richu; Li, Guangsheng; Luk, Keith Dk; Wu, Ed X

    2018-04-16

    Physiological noise reduction plays a critical role in spinal cord (SC) resting-state fMRI (rsfMRI). To reduce physiological noise and increase the robustness of SC rsfMRI by using an independent component analysis (ICA)-based nuisance regression (ICANR) method. Retrospective. Ten healthy subjects (female/male = 4/6, age = 27 ± 3 years, range 24-34 years). 3T/gradient-echo echo planar imaging (EPI). We used three alternative methods (no regression [Nil], conventional region of interest [ROI]-based noise reduction method without ICA [ROI-based], and correction of structured noise using spatial independent component analysis [CORSICA]) to compare with the performance of ICANR. Reduction of the influence of physiological noise on the SC and the reproducibility of rsfMRI analysis after noise reduction were examined. The correlation coefficient (CC) was calculated to assess the influence of physiological noise. Reproducibility was calculated by intraclass correlation (ICC). Results from different methods were compared by one-way analysis of variance (ANOVA) with post-hoc analysis. No significant difference in cerebrospinal fluid (CSF) pulsation influence or tissue motion influence were found (P = 0.223 in CSF, P = 0.2461 in tissue motion) in the ROI-based (CSF: 0.122 ± 0.020; tissue motion: 0.112 ± 0.015), and Nil (CSF: 0.134 ± 0.026; tissue motion: 0.124 ± 0.019). CORSICA showed a significantly stronger influence of CSF pulsation and tissue motion (CSF: 0.166 ± 0.045, P = 0.048; tissue motion: 0.160 ± 0.032, P = 0.048) than Nil. ICANR showed a significantly weaker influence of CSF pulsation and tissue motion (CSF: 0.076 ± 0.007, P = 0.0003; tissue motion: 0.081 ± 0.014, P = 0.0182) than Nil. The ICC values in the Nil, ROI-based, CORSICA, and ICANR were 0.669, 0.645, 0.561, and 0.766, respectively. ICANR more effectively reduced physiological noise from both tissue motion and CSF pulsation than three alternative methods. ICANR increases the robustness of SC rsf

  11. A Noise Robust Statistical Texture Model

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille; Larsen, Rasmus

    2002-01-01

    This paper presents a novel approach to the problem of obtaining a low dimensional representation of texture (pixel intensity) variation present in a training set after alignment using a Generalised Procrustes analysis. We extend the conventional analysis of training textures in the Active Appearance Models segmentation framework. This is accomplished by augmenting the model with an estimate of the covariance of the noise present in the training data. This results in a more compact model maximising the signal-to-noise ratio, thus favouring subspaces rich on signal, but low on noise

  12. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.

    Science.gov (United States)

    Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël

    2018-02-01

    During the last two decades, MRI has been increasingly used for providing valuable quantitative information about spinal cord morphometry, such as quantification of the spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to be able to propose quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, fields of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that attempts to strike a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature, which aims to automatically split brain and spine regions, is introduced, to be able to detect a consistent spinal cord centerline, independently from the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T2-weighted, n = 287; T1-weighted, n = 120; T2*-weighted, n = 307; diffusion-weighted, n = 90), and 501 subjects including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a

  13. SU-F-R-38: Impact of Smoothing and Noise On Robustness of CBCT Textural Features for Prediction of Response to Radiotherapy Treatment of Head and Neck Cancers

    Energy Technology Data Exchange (ETDEWEB)

    Bagher-Ebadian, H; Chetty, I; Liu, C; Movsas, B; Siddiqui, F [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To examine the impact of image smoothing and noise on the robustness of textural information extracted from CBCT images for prediction of radiotherapy response for patients with head/neck (H/N) cancers. Methods: CBCT image datasets for 14 patients with H/N cancer treated with radiation (70 Gy in 35 fractions) were investigated. A deformable registration algorithm was used to fuse planning CT’s to CBCT’s. Tumor volume was automatically segmented on each CBCT image dataset. Local control at 1-year was used to classify 8 patients as responders (R), and 6 as non-responders (NR). A smoothing filter [2D Adaptive Weiner (2DAW) with 3 different windows (ψ=3, 5, and 7)], and two noise models (Poisson and Gaussian, SNR=25) were implemented, and independently applied to CBCT images. Twenty-two textural features, describing the spatial arrangement of voxel intensities calculated from gray-level co-occurrence matrices, were extracted for all tumor volumes. Results: Relative to CBCT images without smoothing, none of 22 textural features extracted showed any significant differences when smoothing was applied (using the 2DAW with filtering parameters of ψ=3 and 5), in the responder and non-responder groups. When smoothing, 2DAW with ψ=7 was applied, one textural feature, Information Measure of Correlation, was significantly different relative to no smoothing. Only 4 features (Energy, Entropy, Homogeneity, and Maximum-Probability) were found to be statistically different between the R and NR groups (Table 1). These features remained statistically significant discriminators for R and NR groups in presence of noise and smoothing. Conclusion: This preliminary work suggests that textural classifiers for response prediction, extracted from H&N CBCT images, are robust to low-power noise and low-pass filtering. While other types of filters will alter the spatial frequencies differently, these results are promising. The current study is subject to Type II errors. A much

  14. The area-of-interest problem in eyetracking research: A noise-robust solution for face and sparse stimuli.

    Science.gov (United States)

    Hessels, Roy S; Kemner, Chantal; van den Boomen, Carlijn; Hooge, Ignace T C

    2016-12-01

    A problem in eyetracking research is choosing areas of interest (AOIs): Researchers in the same field often use widely varying AOIs for similar stimuli, making cross-study comparisons difficult or even impossible. Subjective choices made while constructing AOIs cause differences in AOI shape, size, and location. On the other hand, not many guidelines for constructing AOIs, or comparisons between AOI-production methods, are available. In the present study, we addressed this gap by comparing AOI-production methods in face stimuli, using data collected with infants and adults (with autism spectrum disorder [ASD] and matched controls). Specifically, we report that the attention-attracting and attention-maintaining capacities of AOIs differ between AOI-production methods, and that this matters for statistical comparisons in one of the three groups investigated (the ASD group). In addition, we investigated the relation between AOI size and an AOI's attention-attracting and attention-maintaining capacities, as well as the consequences for statistical analyses, and report that adopting large AOIs solves the problem of statistical differences between the AOI methods. Finally, we tested AOI-production methods for their robustness to noise, and report that large AOIs (using the Voronoi tessellation method or the limited-radius Voronoi tessellation method with large radii) are most robust to noise. We conclude that large AOIs are a noise-robust solution in face stimuli and, when implemented using the Voronoi method, are the most objective of the researcher-defined AOIs. Adopting Voronoi AOIs in face-scanning research should allow better between-group and cross-study comparisons.
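    In practice, assigning fixations to Voronoi AOIs reduces to a nearest-seed lookup, and the limited-radius variant simply discards fixations farther than a cutoff. The following minimal sketch assumes AOI seed points (e.g. eye, nose, and mouth landmarks) are given in pixel coordinates; the radius value is illustrative.

    ```python
    # Minimal sketch (illustrative): (limited-radius) Voronoi AOI assignment of fixations.
    import numpy as np
    from scipy.spatial import cKDTree

    def assign_fixations_to_aois(fixations_xy, aoi_seeds_xy, radius=None):
        """fixations_xy: (N,2); aoi_seeds_xy: (K,2). Returns AOI index per fixation, or -1."""
        tree = cKDTree(np.asarray(aoi_seeds_xy))
        dist, idx = tree.query(np.asarray(fixations_xy))   # nearest seed = Voronoi cell
        if radius is not None:
            idx = np.where(dist <= radius, idx, -1)        # limited-radius variant
        return idx

    # labels = assign_fixations_to_aois(fixations, seeds, radius=150)  # radius in px, assumed
    ```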

  15. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. This algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities, coherence functions etc., in three procedure steps. Each obtained noise pattern is examined by using the distances from its reference patterns prepared for various plant states. Then, the plant state is identified by synthesizing each result with an evaluation weight. This weight is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW (th) Steam Generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm efficiency under the proper selection of noise patterns. Results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)
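    The classification step described above can be caricatured as a weighted nearest-reference-pattern decision. The sketch below is illustrative only: the distance measure, the data structures, and the way the evaluation weights enter are assumptions, not the algorithm's actual implementation.

    ```python
    # Minimal sketch (illustrative): diagnose the plant state by combining, with
    # pre-computed evaluation weights, the distances between each observed noise
    # pattern (e.g. PSD or coherence vector) and the reference patterns per state.
    import numpy as np

    def diagnose(observed, references, weights):
        """observed: {pattern_name: vector}; references: {state: {pattern_name: vector}};
        weights: {pattern_name: float}, derived beforehand from the reference patterns."""
        scores = {}
        for state, ref_patterns in references.items():
            score = 0.0
            for name, vec in observed.items():
                d = np.linalg.norm(np.asarray(vec) - np.asarray(ref_patterns[name]))
                score += weights[name] * d
            scores[state] = score
        return min(scores, key=scores.get)     # state with the smallest weighted distance
    ```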

  16. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation.

    Science.gov (United States)

    Kim, Jongpal; Kim, Jihoon; Ko, Hyoungho

    2015-12-31

    To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal-oxide-semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input referred current noise of 260 pArms.

  17. The benefit obtained from visually displayed text from an automatic speech recognizer during listening to speech presented in noise

    NARCIS (Netherlands)

    Zekveld, A.A.; Kramer, S.E.; Kessens, J.M.; Vlaming, M.S.M.G.; Houtgast, T.

    2008-01-01

    OBJECTIVES: The aim of this study was to evaluate the benefit that listeners obtain from visually presented output from an automatic speech recognition (ASR) system during listening to speech in noise. DESIGN: Auditory-alone and audiovisual speech reception thresholds (SRTs) were measured. The SRT

  18. Robust random telegraph conductivity noise in single crystals of the ferromagnetic insulating manganite La0.86Ca0.14MnO3

    Science.gov (United States)

    Przybytek, J.; Fink-Finowicki, J.; Puźniak, R.; Shames, A.; Markovich, V.; Mogilyansky, D.; Jung, G.

    2017-03-01

    Robust random telegraph conductivity fluctuations have been observed in La0.86Ca0.14MnO3 manganite single crystals. At room temperatures, the spectra of conductivity fluctuations are featureless and follow a 1 /f shape in the entire experimental frequency and bias range. Upon lowering the temperature, clear Lorentzian bias-dependent excess noise appears on the 1 /f background and eventually dominates the spectral behavior. In the time domain, fully developed Lorentzian noise appears as pronounced two-level random telegraph noise with a thermally activated switching rate, which does not depend on bias current and applied magnetic field. The telegraph noise is very robust and persists in the exceptionally wide temperature range of more than 50 K. The amplitude of the telegraph noise decreases exponentially with increasing bias current in exactly the same manner as the sample resistance increases with the current, pointing out the dynamic current redistribution between percolation paths dominated by phase-separated clusters with different conductivity as a possible origin of two-level conductivity fluctuations.

  19. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
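    As a rough illustration of how the video-based runway position estimates might be smoothed, here is a minimal constant-velocity Kalman filter in NumPy; it is not the authors' filter, the state model and noise levels are illustrative assumptions, and additional sensors (position, attitude) would enter through further measurement updates.

    ```python
    # Minimal sketch (illustrative): constant-velocity Kalman filter over a scalar
    # runway-edge position measured from each video frame.
    import numpy as np

    dt = 1 / 30.0                                     # frame period (assumed 30 fps)
    F = np.array([[1, dt], [0, 1]])                   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])                        # only position is measured
    Q = np.diag([1e-3, 1e-2])                         # process noise (assumed)
    R = np.array([[4.0]])                             # measurement noise (assumed)

    def kf_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the runway position measured in the current frame
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P
    ```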

  20. Nonlatching positive feedback enables robust bimodality by decoupling expression noise from the mean

    Energy Technology Data Exchange (ETDEWEB)

    Razooky, Brandon S. [Rockefeller Univ., New York, NY (United States). Lab. of Virology and Infectious Disease; Gladstone Institutes (Virology and Immunology), San Francisco, CA (United States); Univ. of California, San Francisco, CA (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Nanophase Materials Science (CNMS); Univ. of Tennessee, Knoxville, TN (United States). Bredesen Center for Interdisciplinary; Cao, Youfang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hansen, Maike M. K. [Gladstone Institutes (Virology and Immunology), San Francisco, CA (United States); Perelson, Alan S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Simpson, Michael L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Nanophase Materials Science (CNMS); Univ. of Tennessee, Knoxville, TN (United States). Bredesen Center for Interdisciplinary; Weinberger, Leor S. [Gladstone Institutes (Virology and Immunology), San Francisco, CA (United States); Univ. of California, San Francisco, CA (United States). Dept. of Biochemistry and Biophysics; Univ. of California, San Francisco, CA (United States). QB3: California Inst. of Quantitative Biosciences; Univ. of California, San Francisco, CA (United States). Dept. of Pharmaceutical Chemistry

    2017-10-18

    Fundamental to biological decision-making is the ability to generate bimodal expression patterns where two alternate expression states simultaneously exist. Here in this study, we use a combination of single-cell analysis and mathematical modeling to examine the sources of bimodality in the transcriptional program controlling HIV’s fate decision between active replication and viral latency. We find that the HIV Tat protein manipulates the intrinsic toggling of HIV’s promoter, the LTR, to generate bimodal ON-OFF expression, and that transcriptional positive feedback from Tat shifts and expands the regime of LTR bimodality. This result holds for both minimal synthetic viral circuits and full-length virus. Strikingly, computational analysis indicates that the Tat circuit’s non-cooperative ‘non-latching’ feedback architecture is optimized to slow the promoter’s toggling and generate bimodality by stochastic extinction of Tat. In contrast to the standard Poisson model, theory and experiment show that non-latching positive feedback substantially dampens the inverse noise-mean relationship to maintain stochastic bimodality despite increasing mean-expression levels. Given the rapid evolution of HIV, the presence of a circuit optimized to robustly generate bimodal expression appears consistent with the hypothesis that HIV’s decision between active replication and latency provides a viral fitness advantage. More broadly, the results suggest that positive-feedback circuits may have evolved not only for signal amplification but also for robustly generating bimodality by decoupling expression fluctuations (noise) from mean expression levels.

  1. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation
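
    The automated approach guarantees by construction that the discrete adjoint is consistent with the discrete forward model; that consistency can be checked with the standard dot-product test. The sketch below is a generic illustration with a random linear operator standing in for a discretised model, not the FEniCS/libadjoint code itself.

```python
import numpy as np

# Dot-product (adjoint consistency) test: for a linear forward model A,
# the discrete adjoint must satisfy <A x, y> == <x, A^T y> to round-off.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 25))      # stand-in for a discretised forward model

def forward(x):
    return A @ x

def adjoint(y):
    return A.T @ y                 # hand-coded or automatically generated adjoint

x = rng.normal(size=25)
y = rng.normal(size=40)
lhs = np.dot(forward(x), y)
rhs = np.dot(x, adjoint(y))
print(abs(lhs - rhs) / abs(lhs))   # ~1e-15 expected; larger values signal an
                                   # inconsistent adjoint discretisation
```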

  2. Low-Power Photoplethysmogram Acquisition Integrated Circuit with Robust Light Interference Compensation

    Directory of Open Access Journals (Sweden)

    Jongpal Kim

    2015-12-01

    Full Text Available To overcome light interference, including a large DC offset and ambient light variation, a robust photoplethysmogram (PPG) readout chip is fabricated using a 0.13-μm complementary metal–oxide–semiconductor (CMOS) process. Against the large DC offset, a saturation detection and current feedback circuit is proposed to compensate for an offset current of up to 30 μA. For robustness against optical path variation, an automatic emitted light compensation method is adopted. To prevent ambient light interference, an alternating sampling and charge redistribution technique is also proposed. In the proposed technique, no additional power is consumed, and only three differential switches and one capacitor are required. The PPG readout channel consumes 26.4 μW and has an input referred current noise of 260 pArms.
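
    The alternating-sampling idea (take one sample with the LED on and one with the LED off, then difference the pair so that slowly varying ambient light cancels) can be mimicked in software as in the sketch below; the waveform shapes and noise levels are invented for illustration.

```python
import numpy as np

# Alternating sampling: one sample is taken with the LED on (PPG + ambient),
# the next with the LED off (ambient only); differencing each pair cancels the
# slowly varying ambient light. All waveforms and levels are illustrative.
fs = 200.0                                   # pair rate [Hz]
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

ppg = 0.02 * np.sin(2 * np.pi * 1.2 * t)             # pulsatile PPG component
ambient = 0.5 + 0.2 * np.sin(2 * np.pi * 0.3 * t)    # drifting ambient light
led_on = ppg + ambient + 2.6e-4 * rng.standard_normal(t.size)
led_off = ambient + 2.6e-4 * rng.standard_normal(t.size)

recovered = led_on - led_off                 # ambient light and dc offset cancel
print("ambient std before:", np.std(ambient))
print("residual error std:", np.std(recovered - ppg))
```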

  3. Automatic picker of P & S first arrivals and robust event locator

    Science.gov (United States)

    Pinsky, V.; Polozov, A.; Hofstetter, A.

    2003-12-01

    We report on further development of an automatic all-distance location procedure designed for a regional network. The procedure generalizes the previous "local" one: phase picking is based on the ratio of two STAs calculated in two consecutive and equal time windows (instead of the previously used Akaike Information Criterion). "Teleseismic" location is split into two stages: a preliminary and a final one. The preliminary part estimates azimuth and apparent velocity by fitting a plane wave to the automatic P pickings. The apparent velocity criterion is used to decide on the strategy of the following computations: teleseismic or regional. The preliminary estimates of azimuth and apparent velocity provide starting values for the final teleseismic and regional location. Apparent velocity is used to obtain a first-approximation distance to the source on the basis of the P, Pn, Pg travel-time tables. The distance estimate, together with the preliminary azimuth estimate, provides first approximations of the source latitude and longitude via the sine and cosine theorems formulated for the spherical triangle. Final location is based on a robust grid-search optimization procedure, weighting the number of pickings that simultaneously fit the model travel times. The grid covers the initial location and becomes finer while approaching the true hypocenter. The target function is a sum of bell-shaped characteristic functions, used to emphasize true pickings and eliminate outliers. The final solution is the grid point that maximizes the target function. The procedure was applied to a list of ML > 4 earthquakes recorded by the Israel Seismic Network (ISN) in the 1999-2002 time period. Most of them are poorly constrained relative to the network. Nevertheless, locations with an average normalized error relative to bulletin solutions, e=dr/R, of 5% were obtained in each of the distance ranges. The first version of the procedure was incorporated in the national Early Warning System in 2001. Recently, we started to send automatic Early
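
    A minimal 2-D version of the robust grid search could look like the sketch below: the target function sums bell-shaped (Gaussian) characteristic functions of the travel-time residuals, so consistent picks contribute fully while outliers contribute almost nothing. The constant velocity, flat geometry, station layout and threshold values are hypothetical simplifications, not the actual procedure.

```python
import numpy as np

# Robust grid-search epicentre location: the target function sums bell-shaped
# characteristic functions of the travel-time residuals, so picks that fit the
# model contribute ~1 and outlier picks contribute ~0. Values are illustrative.
rng = np.random.default_rng(3)
v = 6.0                                       # constant P velocity [km/s] (simplification)
stations = rng.uniform(-100, 100, size=(8, 2))
true_src = np.array([20.0, -30.0])

picks = np.linalg.norm(stations - true_src, axis=1) / v
picks += rng.normal(scale=0.1, size=picks.size)
picks[0] += 5.0                               # one gross outlier pick

def target(src, sigma=0.3):
    pred = np.linalg.norm(stations - src, axis=1) / v
    res = picks - pred
    res -= np.median(res)                     # crude origin-time correction
    return np.sum(np.exp(-(res / sigma) ** 2))   # bell-shaped characteristic functions

# Coarse-to-fine grid search around an initial guess
best = np.array([0.0, 0.0])
for half, step in [(100, 10.0), (10, 1.0), (1, 0.1)]:
    xs = np.arange(best[0] - half, best[0] + half + step, step)
    ys = np.arange(best[1] - half, best[1] + half + step, step)
    grid = [(x, y) for x in xs for y in ys]
    best = np.array(max(grid, key=lambda p: target(np.array(p))))
print("estimated epicentre:", best)
```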

  4. Automatically sweeping dual-channel boxcar integrator

    International Nuclear Information System (INIS)

    Keefe, D.J.; Patterson, D.R.

    1978-01-01

    An automatically sweeping dual-channel boxcar integrator has been developed to automate the search for a signal that repeatedly follows a trigger pulse by a constant or slowly varying time delay when that signal is completely hidden in random electrical noise and dc-offset drifts. The automatically sweeping dual-channel boxcar integrator improves the signal-to-noise ratio and eliminates dc-drift errors in the same way that a conventional dual-channel boxcar integrator does, but, in addition, automatically locates the hidden signal. When the signal is found, its time delay is displayed with 100-ns resolution, and its peak value is automatically measured and displayed. This relieves the operator of the tedious, time-consuming, and error-prone search for the signal whenever the time delay changes. The automatically sweeping boxcar integrator can also be used as a conventional dual-channel boxcar integrator. In either mode, it can repeatedly integrate a signal up to 990 times and thus make accurate measurements of the signal pulse height in the presence of random noise, dc offsets, and unsynchronized interfering signals
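
    In software, the noise-reduction principle of the dual-channel boxcar (integrate a gate placed on the signal, subtract a baseline gate to cancel dc offset and drift, and average over many triggered repetitions) can be sketched as follows; the pulse shape, gate positions and repetition count are made up for illustration.

```python
import numpy as np

# Dual-channel boxcar integration: for each trigger sweep, integrate a gate on
# the (hidden) signal and a baseline gate, subtract the two to cancel dc offset
# and drift, and average over many repetitions to suppress random noise.
rng = np.random.default_rng(4)
fs = 1e6                          # sample rate [Hz]
n = 1000                          # samples per triggered sweep
t = np.arange(n) / fs
delay, width = 300e-6, 20e-6      # delay and width of the hidden pulse
pulse = np.exp(-0.5 * ((t - delay) / (width / 4)) ** 2)

sig_gate = (t > delay - width / 2) & (t < delay + width / 2)
base_gate = (t > 100e-6) & (t < 100e-6 + width)      # signal-free reference gate

n_rep = 500
acc = 0.0
for _ in range(n_rep):
    sweep = (0.001 * pulse                          # 1 mV pulse buried in...
             + 0.01 * rng.standard_normal(n)        # ...10 mV random noise
             + 0.005 * rng.normal())                # ...and a sweep-to-sweep dc drift
    acc += sweep[sig_gate].mean() - sweep[base_gate].mean()

# Gated average of the 1 mV pulse (about 0.6 mV), recovered despite SNR << 1
print("recovered gated amplitude [V]:", acc / n_rep)
```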

  5. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. The holonomic (and geometrical) quantum gates are believed to be robust against some kinds of noise. Because of their geometrical dependence, the holonomic operators can be robust against this kind of noise: if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and the ideal evolution is calculated for different noise correlation times. The holonomic quantum gates appear robust not only for fast fluctuating fields but also for slow fluctuating fields. These results can be explained by the geometrical features of the holonomic operator: for fast fluctuating fields the fluctuations are canceled out, while for slow fluctuating fields the fluctuations do not perturb the loop in parameter space. (author)

  6. GPU-accelerated automatic identification of robust beam setups for proton and carbon-ion radiotherapy

    International Nuclear Information System (INIS)

    Ammazzalorso, F; Jelen, U; Bednarz, T

    2014-01-01

    We demonstrate acceleration on graphic processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust, when recomputed in presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.

  7. GPU-accelerated automatic identification of robust beam setups for proton and carbon-ion radiotherapy

    Science.gov (United States)

    Ammazzalorso, F.; Bednarz, T.; Jelen, U.

    2014-03-01

    We demonstrate acceleration on graphic processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust, when recomputed in presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is suitable also for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.

  8. Robust Circle Detection Using Harmony Search

    Directory of Open Access Journals (Sweden)

    Jaco Fourie

    2017-01-01

    Full Text Available Automatic circle detection is an important element of many image processing algorithms. Traditionally the Hough transform has been used to find circular objects in images, but more modern approaches that make use of heuristic optimisation techniques have been developed. These are often used in large complex images where the presence of noise or limited computational resources make the Hough transform impractical. Previous research on the use of the Harmony Search (HS) in circle detection showed that HS is an attractive alternative to many of the modern circle detectors based on heuristic optimisers like genetic algorithms and simulated annealing. We propose improvements to this work that enable our algorithm to robustly find multiple circles in larger data sets and still work on realistic images that are heavily corrupted by noisy edges.
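
    A compact Harmony Search loop for fitting a single circle to a set of edge points might look like the sketch below; the fitness simply counts how many samples on the candidate circle fall close to an edge point, and the HS constants (memory size, HMCR, PAR, bandwidth) are illustrative rather than the authors' settings.

```python
import numpy as np

# Harmony Search for single-circle detection on a set of 2-D edge points.
# Fitness: fraction of points sampled on the candidate circle that lie close
# to some edge point. All constants are illustrative.
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 200)
edges = np.c_[50 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)]
edges += rng.normal(scale=0.5, size=edges.shape)
edges = np.vstack([edges, rng.uniform(0, 100, size=(100, 2))])  # clutter points

lo, hi = np.array([0, 0, 5.0]), np.array([100, 100, 40.0])      # (cx, cy, r) bounds

def fitness(c):
    cx, cy, r = c
    a = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    pts = np.c_[cx + r * np.cos(a), cy + r * np.sin(a)]
    d = np.min(np.linalg.norm(pts[:, None, :] - edges[None, :, :], axis=2), axis=1)
    return np.mean(d < 1.0)

hm_size, hmcr, par, bw = 20, 0.9, 0.3, 1.0
memory = rng.uniform(lo, hi, size=(hm_size, 3))
scores = np.array([fitness(c) for c in memory])

for _ in range(2000):
    new = np.empty(3)
    for j in range(3):
        if rng.random() < hmcr:                       # memory consideration
            new[j] = memory[rng.integers(hm_size), j]
            if rng.random() < par:                    # pitch adjustment
                new[j] += rng.uniform(-bw, bw)
        else:                                         # random selection
            new[j] = rng.uniform(lo[j], hi[j])
    new = np.clip(new, lo, hi)
    s = fitness(new)
    worst = np.argmin(scores)
    if s > scores[worst]:
        memory[worst], scores[worst] = new, s

print("best circle (cx, cy, r):", memory[np.argmax(scores)])
```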

  9. Robust statistics for deterministic and stochastic gravitational waves in non-Gaussian noise. II. Bayesian analyses

    International Nuclear Information System (INIS)

    Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.

    2003-01-01

    In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term

  10. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSA is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. DW-PSA may also be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize DW-PSA to temporal PSAs with a higher number of wavelengths.

  11. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    Science.gov (United States)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability to resist noise and interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to solve for the separating clearance of the automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient-maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  12. Noise-robust speech triage.

    Science.gov (United States)

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).

  13. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including that of the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)

  14. White noise theory of robust nonlinear filtering with correlated state and observation noises

    NARCIS (Netherlands)

    Bagchi, Arunabha; Karandikar, Rajeeva

    1992-01-01

    In the direct white noise theory of nonlinear filtering, the state process is still modeled as a Markov process satisfying an Ito stochastic differential equation, while a finitely additive white noise is used to model the observation noise. In the present work, this asymmetry is removed by modeling

  15. White noise theory of robust nonlinear filtering with correlated state and observation noises

    NARCIS (Netherlands)

    Bagchi, Arunabha; Karandikar, Rajeeva

    1994-01-01

    In the existing 'direct' white noise theory of nonlinear filtering, the state process is still modelled as a Markov process satisfying an Itô stochastic differential equation, while a 'finitely additive' white noise is used to model the observation noise. We remove this asymmetry by modelling the

  16. FIACH: A biophysical model for automatic retrospective noise control in fMRI.

    Science.gov (United States)

    Tierney, Tim M; Weiss-Croft, Louise J; Centeno, Maria; Shamshiri, Elhum A; Perani, Suejen; Baldeweg, Torsten; Clark, Christopher A; Carmichael, David W

    2016-01-01

    Different noise sources in fMRI acquisition can lead to spurious false positives and reduced sensitivity. We have developed a biophysically-based model (named FIACH: Functional Image Artefact Correction Heuristic) which extends current retrospective noise control methods in fMRI. FIACH can be applied to both General Linear Model (GLM) and resting state functional connectivity MRI (rs-fcMRI) studies. FIACH is a two-step procedure involving the identification and correction of non-physiological large amplitude temporal signal changes and spatial regions of high temporal instability. We have demonstrated its efficacy in a sample of 42 healthy children while performing language tasks that include overt speech with known activations. We demonstrate large improvements in sensitivity when FIACH is compared with current methods of retrospective correction. FIACH reduces the confounding effects of noise and increases the study's power by explaining significant variance that is not contained within the commonly used motion parameters. The method is particularly useful in detecting activations in inferior temporal regions which have proven problematic for fMRI. We have shown greater reproducibility and robustness of fMRI responses using FIACH in the context of task induced motion. In a clinical setting this will translate to increasing the reliability and sensitivity of fMRI used for the identification of language lateralisation and eloquent cortex. FIACH can benefit studies of cognitive development in young children, patient populations and older adults. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Automatic minimization of ocular artifacts fromelectroencephalogram: A novel approach by combining CompleteEEMD with Adaptive Noise and Renyi’s Entropy

    DEFF Research Database (Denmark)

    Guarascio, Mario; Puthusserypady, Sadasivan

    2017-01-01

    Ocular artifacts (OAs) are one of the major interferences that obscure electroencephalogram (EEG) signals. In this paper, a novel, completely automatic, adaptive and fast method that combines the Complete Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Renyi's Entropy (RE) is proposed for minimizing the OAs from corrupted EEG signals. The RE criterion is suggested to automatically select the Intrinsic Mode Functions (IMFs) used to reconstruct the artifact-minimized EEG signals. The scheme requires only a single-channel OA-corrupted EEG recording and a reasonable computation time. The method is compared to the one based on CEEMDAN and manual choice of IMFs for OA minimization from EEG. Results from extensive simulation studies clearly indicate the efficacy of the proposed scheme in automatically minimizing the OAs from the corrupted EEG signals.
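
    Given a set of IMFs (however obtained), the Renyi-entropy-based selection step can be sketched as below. The entropy order, the histogram estimator, the threshold rule and the synthetic modes are assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

# Select IMFs for reconstruction by Renyi entropy: ocular artifacts tend to be
# large, sparse bursts (low amplitude-histogram entropy), so IMFs with entropy
# below a threshold are discarded before summing the remaining ones.
def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy H_a = 1/(1-a) * log(sum p_i^a) of the amplitude histogram."""
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(6)
t = np.linspace(0, 4, 1024)
blink = 4.0 * np.exp(-((t - 2.0) / 0.1) ** 2)      # ocular artifact burst

# Stand-in for CEEMDAN output: pretend the decomposition separated these modes.
imfs = np.array([0.3 * rng.standard_normal(t.size),   # fast, noise-like IMF
                 np.sin(2 * np.pi * 10 * t),          # brain-rhythm-like IMF
                 blink])                               # artifact IMF

H = np.array([renyi_entropy(imf) for imf in imfs])
keep = H > np.median(H) - 0.5          # illustrative threshold rule
clean = imfs[keep].sum(axis=0)         # artifact-minimised reconstruction
print("entropies:", np.round(H, 2), "kept IMFs:", np.where(keep)[0])
```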

  18. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometries of automobile castings. 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To reduce the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around a key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which accesses the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well and retained 81.6% matching accuracy at 95% closed loop. For sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
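
    The truncated distance field representation around a key point can be sketched with plain numpy and a k-d tree, as below; the grid size, voxel pitch, truncation distance and the synthetic scan patch are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

# Truncated distance field (TDF) around a key point: a local voxel grid where
# each voxel stores its distance to the nearest scan point, clipped at a
# truncation radius and mapped to [0, 1] (1 on the surface, 0 far from it).
def local_tdf(points, keypoint, grid=15, voxel=0.01, trunc=0.05):
    half = grid * voxel / 2.0
    axes = np.linspace(-half + voxel / 2, half - voxel / 2, grid)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    centers = np.c_[gx.ravel(), gy.ravel(), gz.ravel()] + keypoint
    d, _ = cKDTree(points).query(centers)           # nearest-surface distance
    return (1.0 - np.minimum(d, trunc) / trunc).reshape(grid, grid, grid)

rng = np.random.default_rng(7)
# Synthetic "casting" patch: a noisy plane z = 0 sampled around the key point
pts = np.c_[rng.uniform(-0.1, 0.1, 2000),
            rng.uniform(-0.1, 0.1, 2000),
            rng.normal(scale=0.002, size=2000)]
vol = local_tdf(pts, keypoint=np.array([0.0, 0.0, 0.0]))
print(vol.shape, float(vol.max()), float(vol.min()))   # (15, 15, 15) input to the 3D CNN
```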

  19. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing

    2016-01-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen’s kappa of 0.74, indicating substantial agreement between

  20. Noise tolerant spatiotemporal chaos computing.

    Science.gov (United States)

    Kia, Behnam; Kia, Sarvenaz; Lindner, John F; Sinha, Sudeshna; Ditto, William L

    2014-12-01

    We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single-map chaos computing system. In this CML-based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice and the noise contributions attenuate one another, resulting in a system with less noise content and a more robust chaos computing architecture.
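
    The mechanism described above (independent per-node noise is spatially averaged by the diffusive coupling, so less of it survives each update) can be checked numerically with a toy coupled map lattice; the map, coupling strength and noise level below are illustrative.

```python
import numpy as np

# The diffusive coupling step of a coupled map lattice (CML) spatially averages
# the independent per-node noise, so the injected noise is attenuated at every
# update compared with an uncoupled map. Constants are illustrative.
rng = np.random.default_rng(8)
N, eps, sigma = 256, 0.4, 1e-2
f = lambda x: 4.0 * x * (1.0 - x)                  # fully chaotic logistic map

def couple(v):
    """Diffusive nearest-neighbour coupling on a ring lattice."""
    return (1 - eps) * v + 0.5 * eps * (np.roll(v, 1) + np.roll(v, -1))

x = rng.uniform(0.2, 0.8, N)
noise = sigma * rng.standard_normal(N)

# One CML update with and without the injected noise, to isolate its effect
clean_next = couple(f(x))
noisy_next = couple(f(x) + noise)
print("uncoupled noise std :", noise.std())
print("CML residual noise  :", (noisy_next - clean_next).std())  # ~0.66 * sigma here
```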

  1. Comparison of models of automatic classification of textural patterns of mineral presents in Colombian coals

    International Nuclear Information System (INIS)

    Lopez Carvajal, Jaime; Branch Bedoya, John Willian

    2005-01-01

    The automatic classification of objects is a very interesting approach in several problem domains. This paper outlines some results obtained with different classification models to categorize textural patterns of minerals using real digital images. The data set used was characterized by its small size and the presence of noise. The implemented models were the Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree and 3-nearest neighbors. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results, because they could be improved by increasing the amount of data used, while the decision tree (55%) and K-NN (54%) did not seem to be adequate for this problem because of their sensitivity to noise
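
    A generic scikit-learn version of such a comparison, with synthetic data standing in for the mineral texture features and default hyperparameters rather than the study's, might look like the following.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Small, noisy data set standing in for the textural-pattern features.
X, y = make_classification(n_samples=120, n_features=8, n_informative=4,
                           flip_y=0.1, random_state=0)

models = {
    "Bayesian (naive Bayes)": GaussianNB(),
    "Neural network (one hidden layer of 5)": MLPClassifier(hidden_layer_sizes=(5,),
                                                            max_iter=2000,
                                                            random_state=0),
    "SVM": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name:40s} {scores.mean():.2f} +/- {scores.std():.2f}")
```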

  2. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    Science.gov (United States)

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work was aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic model of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America

  3. Robust Wavelet Estimation to Eliminate Simultaneously the Effects of Boundary Problems, Outliers, and Correlated Noise

    Directory of Open Access Journals (Sweden)

    Alsaidi M. Altaher

    2012-01-01

    Full Text Available Classical wavelet thresholding methods suffer from boundary problems caused by the application of the wavelet transformations to a finite signal. As a result, large bias at the edges and artificial wiggles occur when the classical boundary assumptions are not satisfied. Although polynomial wavelet regression and local polynomial wavelet regression effectively reduce the risk of this problem, the estimates from these two methods can be easily affected by the presence of correlated noise and outliers, giving inaccurate estimates. This paper introduces two robust methods in which the effects of boundary problems, outliers, and correlated noise are simultaneously taken into account. The proposed methods combine a thresholding estimator with either a local polynomial model or a polynomial model using the generalized least squares method instead of the ordinary one. A primary step that involves removing the outlying observations through a statistical function is considered as well. The practical performance of the proposed methods has been evaluated through simulation experiments and real data examples. The results are strong evidence that the proposed methods are extremely effective in terms of correcting the boundary bias and eliminating the effects of outliers and correlated noise.
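
    A much-simplified sketch of the general idea (detrend with a low-order polynomial to tame boundary bias, soft-threshold the wavelet detail coefficients of the residual, and add the trend back) is shown below using PyWavelets; it omits the paper's generalized-least-squares weighting and outlier-removal steps.

```python
import numpy as np
import pywt

# Simplified polynomial + wavelet thresholding: fit a low-order polynomial to
# reduce boundary bias, soft-threshold the detail coefficients of the residual,
# then add the trend back. GLS weighting and outlier removal are omitted here.
rng = np.random.default_rng(9)
x = np.linspace(0, 1, 512)
signal = np.sin(4 * np.pi * x) + 2 * x          # smooth signal with a linear trend
noisy = signal + 0.3 * rng.standard_normal(x.size)

trend = np.polyval(np.polyfit(x, noisy, 3), x)  # polynomial part (handles boundaries)
resid = noisy - trend

coeffs = pywt.wavedec(resid, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # robust noise estimate, finest level
thr = sigma * np.sqrt(2 * np.log(resid.size))   # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = trend + pywt.waverec(coeffs, "db4")[: resid.size]

print("RMSE noisy   :", np.sqrt(np.mean((noisy - signal) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - signal) ** 2)))
```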

  4. Robust Automatic Target Recognition via HRRP Sequence Based on Scatterer Matching

    Directory of Open Access Journals (Sweden)

    Yuan Jiang

    2018-02-01

    Full Text Available High resolution range profile (HRRP) plays an important role in wideband radar automatic target recognition (ATR). In order to alleviate the sensitivity to clutter and target aspect, employing a sequence of HRRPs is a promising approach to enhance ATR performance. In this paper, a novel HRRP sequence-matching method based on singular value decomposition (SVD) is proposed. First, the HRRP sequence is decoupled into the angle space and the range space via SVD, which correspond to the span of the left and the right singular vectors, respectively. Second, atomic norm minimization (ANM) is utilized to estimate dominant scatterers in the range space, and the Hausdorff distance is employed to measure the scatterer similarity between the test and training data. Next, the angle space similarity between the test and training data is evaluated based on the left singular vector correlations. Finally, the range space matching result and the angle space correlation are fused with the singular values as weights. Simulation and field experimental results demonstrate that the proposed matching metric is a robust similarity measure for HRRP sequence recognition.
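
    The SVD decoupling of an HRRP sequence into angle and range spaces, and a Hausdorff-distance comparison of dominant range-space scatterers, can be sketched as follows; the profiles are synthetic and simple peak picking stands in for the atomic norm minimization step.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Decompose an HRRP sequence (rows = aspect angles, columns = range cells) with
# the SVD: left singular vectors span the angle space, right singular vectors
# the range space. Dominant scatterer positions taken from the leading
# range-space vector are compared with the Hausdorff distance.
rng = np.random.default_rng(10)

def hrrp_sequence(scatterer_cells, n_pulses=32, n_cells=128):
    seq = 0.05 * rng.standard_normal((n_pulses, n_cells))
    for c in scatterer_cells:
        seq[:, c] += 1.0 + 0.1 * rng.standard_normal(n_pulses)
    return seq

def dominant_scatterers(seq, k=5):
    _, s, vt = np.linalg.svd(seq, full_matrices=False)
    range_vec = np.abs(vt[0])                    # leading range-space vector
    return np.sort(np.argsort(range_vec)[-k:])   # positions of the k strongest cells

test = dominant_scatterers(hrrp_sequence([20, 45, 46, 90, 110]))
train = dominant_scatterers(hrrp_sequence([21, 45, 47, 89, 111]))   # same target class

u = test.reshape(-1, 1).astype(float)
v = train.reshape(-1, 1).astype(float)
d = max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])   # symmetric Hausdorff
print("scatterer sets:", test, train, " Hausdorff distance:", d)
```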

  5. Automatic generation of anatomic characteristics from cerebral aneurysm surface models.

    Science.gov (United States)

    Neugebauer, M; Lawonn, K; Beuing, O; Preim, B

    2013-03-01

    Computer-aided research on cerebral aneurysms often depends on a polygonal mesh representation of the vessel lumen. To support a differentiated, anatomy-aware analysis, it is necessary to derive anatomic descriptors from the surface model. We present an approach for the automatic decomposition of the adjacent vessels into near- and far-vessel regions and for the computation of the axial plane. We also present two example applications of the geometric descriptors: automatic computation of a unique vessel order and automatic viewpoint selection. Approximation methods are employed to analyze vessel cross-sections and the vessel area profile along the centerline. The resulting transition zones between near- and far-vessel regions are used as input for an optimization process to compute the axial plane. The unique vessel order is defined via projection into the plane space of the axial plane. The viewing direction for the automatic viewpoint selection is derived from the normal vector of the axial plane. The approach was successfully applied to representative data sets exhibiting a broad variability with respect to the configuration of their adjacent vessels. A robustness analysis showed that the automatic decomposition is stable against noise. A survey with 4 medical experts showed broad agreement with the automatically defined transition zones. Due to the general nature of the underlying algorithms, this approach is applicable to most of the likely aneurysm configurations in the cerebral vasculature. Additional geometric information obtained during automatic decomposition can support correction in case the automatic approach fails. The resulting descriptors can be used for various applications in the field of visualization, exploration and analysis of cerebral aneurysms.

  6. Robust quantum secure direct communication and authentication protocol against decoherence noise based on six-qubit DF state

    International Nuclear Information System (INIS)

    Chang Yan; Zhang Shi-Bin; Yan Li-Li; Han Gui-Hua

    2015-01-01

    By using six-qubit decoherence-free (DF) states as quantum carriers and decoy states, a robust quantum secure direct communication and authentication (QSDCA) protocol against decoherence noise is proposed. Four six-qubit DF states are used in the process of secret transmission, however only the |0′〉 state is prepared. The other three six-qubit DF states can be obtained by permuting the outputs of the setup for |0′〉. By using the |0′〉 state as the decoy state, the detection rate and the qubit error rate reach 81.3%, and they will not change with the noise level. The stability and security are much higher than those of the ping–pong protocol both in an ideal scenario and a decoherence noise scenario. Even if the eavesdropper measures several qubits, exploiting the coherent relationship between these qubits, she can gain one bit of secret information with probability 0.042. (paper)

  7. Robust surface registration using N-points approximate congruent sets

    Directory of Open Access Journals (Sweden)

    Yao Jian

    2011-01-01

    Full Text Available Scans acquired by 3D sensors are typically represented in a local coordinate system. When multiple scans, taken from different locations, represent the same scene, they must be registered to a common reference frame. We propose a fast and robust registration approach to automatically align two scans by finding two sets of N-points that are approximately congruent under rigid transformation and lead to a good estimate of the transformation between their corresponding point clouds. Given two scans, our algorithm randomly searches for the best sets of congruent groups of points using a RANSAC-based approach. To successfully and reliably align two scans when there is only a small overlap, we improve the basic RANSAC random selection step by employing a weight function that approximates the probability of each pair of points in one scan matching a pair in the other. The search time to find pairs of congruent sets of N-points is greatly reduced by employing a fast search codebook based on both binary and multi-dimensional lookup tables. Moreover, we introduce a novel indicator of the overlapping region quality which is used to verify the estimated rigid transformation and to improve the alignment robustness. Our framework is general enough to incorporate and efficiently combine different point descriptors derived from geometric and texture-based feature points or scene geometrical characteristics. We also present a method to improve the matching effectiveness of texture feature descriptors by extracting them from an atlas of rectified images recovered from the scan reflectance image. Our algorithm is robust with respect to different sampling densities and also resilient to noise and outliers. We demonstrate its robustness and efficiency on several challenging scan datasets with varying degrees of noise, outliers and extent of overlap, acquired from indoor and outdoor scenarios.

  8. Active Noise Control for Dishwasher noise

    Science.gov (United States)

    Lee, Nokhaeng; Park, Youngjin

    2016-09-01

    The dishwasher is a useful home appliance that is continually used for automatically washing dishes. It is commonly built into the kitchen for practicality and better use of space. In this environment, people are easily exposed to dishwasher noise, so it is an important issue for consumers, especially for those living in small open-plan spaces. Recently, sound power levels of about 40-50 dBA have been achieved by removing noise sources and by passive means of insulating the acoustic path. For further reduction, a quiet mode with a lower cycle speed has been introduced, but this deteriorates the washing capacity. Against this background, we propose active noise control for dishwasher noise. It is observed that the noise propagates mainly from the lower part of the front side. Control speakers are placed in this part for collocation. An observation part that estimates the sound field distribution and a control part that generates the anti-noise are designed for active noise control. Simulation results show that the proposed active noise control scheme has potential for dishwasher noise reduction.
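
    A bare-bones adaptive (LMS) anti-noise loop is sketched below to make the observation/control split concrete; it ignores the secondary-path modelling a practical FxLMS controller needs, and the tonal dishwasher noise and filter settings are invented for illustration.

```python
import numpy as np

# Minimal LMS active noise control loop: an adaptive FIR filter drives the
# control loudspeaker from a reference signal so that the error-microphone
# signal (primary noise + anti-noise) is minimised. The acoustic secondary
# path is taken as ideal here, which a practical FxLMS controller must model.
rng = np.random.default_rng(11)
fs, n = 8000, 4 * 8000
t = np.arange(n) / fs
reference = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)  # pump tones
primary = np.convolve(reference, [0.0, 0.8, 0.3], mode="full")[:n]           # path to mic
primary += 0.02 * rng.standard_normal(n)

L, mu = 16, 0.01
w = np.zeros(L)                 # adaptive filter weights
xbuf = np.zeros(L)              # reference history
err = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = reference[k]
    anti = w @ xbuf                       # control loudspeaker output
    err[k] = primary[k] - anti            # error-microphone signal
    w += mu * err[k] * xbuf               # LMS weight update

print("noise power before:", np.mean(primary[-fs:] ** 2))
print("noise power after :", np.mean(err[-fs:] ** 2))
```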

  9. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    Science.gov (United States)

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of instrumental noise level coupled with first-order derivative of chromatographic signal to automatically extract chromatographic peaks in the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degree of overlapped chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
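
    A stripped-down version of the two stages (robust estimation of the instrumental noise level followed by peak detection, then a background fitted to the peak-free regions and subtracted) might look like this sketch; the chromatogram is synthetic and the thresholds are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

# Stage 1: estimate the instrumental noise level robustly and detect peaks.
# Stage 2: fit a smooth background to the peak-free regions and subtract it.
rng = np.random.default_rng(12)
t = np.linspace(0, 30, 3000)
peaks_true = 3.0 * np.exp(-0.5 * ((t - 8) / 0.15) ** 2) \
           + 1.5 * np.exp(-0.5 * ((t - 17) / 0.2) ** 2)
drift = 0.5 + 0.04 * t + 0.1 * np.sin(0.2 * t)            # background drift
y = peaks_true + drift + 0.02 * rng.standard_normal(t.size)

# Robust noise estimate from the median absolute first difference
noise = np.median(np.abs(np.diff(y))) / 0.6745
idx, _ = find_peaks(y, prominence=10 * noise)

# Mask a window around each detected peak and fit the background elsewhere
mask = np.ones(t.size, dtype=bool)
for i in idx:
    mask[max(0, i - 120):i + 120] = False
background = np.polyval(np.polyfit(t[mask], y[mask], 5), t)
corrected = y - background

print("detected peaks at t =", np.round(t[idx], 2))
print("residual baseline level:", np.round(np.mean(corrected[mask]), 4))
```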

  10. Noise-aware dictionary-learning-based sparse representation framework for detection and removal of single and combined noises from ECG signal.

    Science.gov (United States)

    Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M

    2017-02-01

    Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary-learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse-representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. Noise detection and identification are performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary-learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wander, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of the ECG signal.

  11. Robust Automatic Modulation Classification Technique for Fading Channels via Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Jung Hwan Lee

    2017-08-01

    Full Text Available In this paper, we propose a deep neural network (DNN)-based automatic modulation classification (AMC) method for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN) channels, classification accuracy degrades for fading channels where the amplitude and phase of the channel gain change in time. The key contributions of this paper are twofold. First, we analyze the effectiveness of a variety of statistical features for the AMC task in fading channels. We reveal that the features that are shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on the DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. A fully connected feedforward network with four hidden layers is trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers significant performance gains over existing AMC methods in fading channels.

  12. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    Science.gov (United States)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
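
    The angular-domain resampling step (interpolate the time-sampled vibration onto a uniform grid of shaft angle using the extracted rotating-angle curve, so that speed fluctuations no longer smear the fault orders) can be sketched as below, with a synthetic speed profile standing in for the phase information extracted from the PMSG current.

```python
import numpy as np

# Order-domain (angular) resampling: given the mechanical rotating angle as a
# function of time (here synthesised from a fluctuating speed profile), the
# vibration signal is interpolated onto a uniform angle grid so that speed
# fluctuations no longer smear the bearing fault orders. Values are illustrative.
rng = np.random.default_rng(13)
fs = 10_000
t = np.arange(0, 5, 1 / fs)

speed_hz = 5.0 + 0.8 * np.sin(2 * np.pi * 0.3 * t)       # fluctuating shaft speed [rev/s]
angle = 2 * np.pi * np.cumsum(speed_hz) / fs              # rotating angle curve [rad]

fault_order = 3.57                                         # impacts per revolution
vibration = np.cos(fault_order * angle) + 0.5 * rng.standard_normal(t.size)

# Resample onto a uniform angle grid (256 samples per revolution)
samples_per_rev = 256
uniform_angle = np.arange(0, angle[-1], 2 * np.pi / samples_per_rev)
resampled = np.interp(uniform_angle, angle, vibration)

# In the order spectrum of the resampled signal the fault order is a sharp line
spectrum = np.abs(np.fft.rfft(resampled * np.hanning(resampled.size)))
orders = np.fft.rfftfreq(resampled.size, d=1.0 / samples_per_rev)
print("strongest order:", orders[np.argmax(spectrum[1:]) + 1])   # close to 3.57
```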

  13. MIMO scheme performance and detection in epsilon noise

    OpenAIRE

    Stepanov, Sander

    2006-01-01

    A new approach to the analysis and decoding of MIMO signaling is developed for a common model of non-Gaussian noise consisting of background and impulsive noise, named epsilon-noise. It is shown that performance under non-Gaussian noise is significantly worse than under Gaussian noise. Simulation results support the theory. A detection rule that is robust in the statistical sense is suggested for this kind of noise; it features much better detector performance than a detector designed for Gaussian noise in an impulsive environment and...

  14. Robust automatic control system of vessel descent-rise device for plant with distributed parameters “cable – towed underwater vehicle”

    Science.gov (United States)

    Chupina, K. V.; Kataev, E. V.; Khannanov, A. M.; Korshunov, V. N.; Sennikov, I. A.

    2018-05-01

    The paper is devoted to the problem of synthesis of a robust control system for a plant with distributed parameters. The vessel descent-rise device has a heave compensation function for stabilization of the towed underwater vehicle at a set depth. The sea state code, the parameters of the underwater vehicle and the cable vary during underwater operations, and the vessel heave is a stochastic process. This means that the plant and the external disturbances are uncertain. That is why it is necessary to use robust control theory for the synthesis of an automatic control system, but without the use of traditional optimization methods, because the cable has distributed parameters. The proposed technique has allowed us to design an effective control system for stabilization of the immersion depth of the towed underwater vehicle for various degrees of sea roughness and to provide its robustness to deviations of the parameters of the vehicle and the cable’s length.

  15. Stochastic models of cellular circadian rhythms in plants help to understand the impact of noise on robustness and clock structure

    Directory of Open Access Journals (Sweden)

    Maria Luisa eGuerriero

    2014-10-01

    Full Text Available Rhythmic behavior is essential for plants; for example, daily (circadian) rhythms control photosynthesis and seasonal rhythms regulate their life cycle. The core of the circadian clock is a genetic network that coordinates the expression of specific clock genes in a circadian rhythm reflecting the 24-hour day/night cycle. Circadian clocks exhibit stochastic noise due to the low copy numbers of clock genes and the consequent cell-to-cell variation: this intrinsic noise plays a major role in circadian clocks by inducing more robust oscillatory behavior. Another source of noise is the environment, which causes variation in temperature and light intensity: this extrinsic noise is part of the requirement for the structural complexity of clock networks. Advances in experimental techniques now permit single-cell measurements and the development of single-cell models. Here we present some modeling studies showing the importance of considering both types of noise in understanding how plants adapt to regular and irregular light variations. Stochastic models have proven useful for understanding the effect of regular variations. By contrast, the impact of irregular variations and the interaction of different noise sources are less studied.

  16. Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.

    Science.gov (United States)

    Vuković, Najdan; Miljković, Zoran

    2015-03-01

    Feedforward neural networks (FFNN) are among the most used neural networks for modeling various nonlinear problems in engineering. In sequential and especially real-time processing, all neural network models fail when faced with outliers. Outliers are found across a wide range of engineering problems. Recent research results in the field have shown that, to avoid overfitting or divergence of the model, a new approach is needed, especially if the FFNN is to run sequentially or in real time. To accommodate the limitations of FFNNs when the training data contain a certain number of outliers, this paper presents a new learning algorithm based on an improvement of the conventional extended Kalman filter (EKF). The extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is not constant; the sequence of noise measurement covariances is modeled as a stochastic process over the set of symmetric positive-definite matrices, in which the prior is modeled as an inverse Wishart distribution. In each iteration, EKF-OR simultaneously estimates the noise statistics and the current best estimate of the FFNN parameters. The Bayesian framework enables one to mathematically derive the required expressions, while the analytical intractability of the Bayes' update step is solved by using a structured variational approximation. All mathematical expressions in the paper are derived using first principles. An extensive experimental study shows that an FFNN trained with the developed learning algorithm achieves low prediction error and good generalization quality regardless of the presence of outliers in the training data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Research on influence of gear parameters on noise, vibrations and harshness conditions for automatic transmissions run-off cycle

    Directory of Open Access Journals (Sweden)

    Pascalau Nelu

    2017-01-01

    Full Text Available Noise, vibration and harshness (NVH) defines, as a whole, the specific field within the automotive industry that studies mostly the noise and vibrations of different assemblies (such as the chassis or the drivetrain gearbox) or of complete vehicles, particularly cars and trucks. Gear quality parameters have been studied, and experience has shown that these parameters are highly relevant to NVH. Therefore, this paper introduces a case study to highlight the influence of two of these parameters, profile angle deviation (fHα) and tooth trace angle deviation (fHβ), on the run-off cycle on test benches for a high-performance automatic transmission designed for passenger vehicles. The demand for high accuracy is mandatory, so fine adjustments are required, as further observed, in order to meet the requirements for a lower NVH run-off rate over the whole lifetime.

  18. An Automatic K-Means Clustering Algorithm of GPS Data Combining a Novel Niche Genetic Algorithm with Noise and Density

    Directory of Open Access Journals (Sweden)

    Xiangbing Zhou

    2017-12-01

    Full Text Available Rapidly growing Global Positioning System (GPS) data play an important role in trajectory data and their applications (e.g., GPS-enabled smart devices). In order to employ K-means to mine the better origins and destinations (OD) behind the GPS data and overcome its shortcomings, including slowness of convergence, sensitivity to initial seed selection, and getting stuck in a local optimum, this paper proposes and focuses on a novel niche genetic algorithm (NGA) with density and noise for K-means clustering (NoiseClust). In NoiseClust, an improved noise method and K-means++ are proposed to produce the initial population and capture higher-quality seeds that can automatically determine the proper number of clusters, and also handle the different sizes and shapes of genes. A density-based method is presented to divide the number of niches, with the aim of maintaining population diversity. Adaptive probabilities of crossover and mutation are also employed to prevent convergence to a local optimum. Finally, the centers (the best chromosomes) are obtained and then fed into K-means as initial seeds to generate even higher quality clustering results by allowing the initial seeds to readjust as needed. Experimental results based on taxi GPS data sets demonstrate that NoiseClust has high performance and effectiveness, and easily mines the city’s OD situations in the four taxi GPS data sets.

  19. Automatic Detection of P and S Phases by Support Vector Machine

    Science.gov (United States)

    Jiang, Y.; Ning, J.; Bao, T.

    2017-12-01

    Many methods in seismology rely on accurately picked phases. A well-performing program for automatic phase picking would support the application of these methods. Previous research has mostly focused on finding characteristics that distinguish phases from noise, with limited success. We have developed a new method that is mainly based on a support vector machine to detect P and S phases. We first feed waveform segments into the support vector machine and let it work out a hyperplane that divides the space into two parts: noise and phase. We further use the same method to find a hyperplane that separates the phase space into P and S parts based on the three components' cross-correlation matrix. In order to further improve the ability of phase detection, we also employ array data. Finally, we show that the overall performance of our method is robust by using both synthetic and real data.

  20. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization which was based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools
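
    A simplified version of the optimisation idea (choose the CLAHE clip limit that maximises the entropy of the processed image) can be sketched with scikit-image; the coarse grid search below stands in for the interior-point optimisation, and the low-contrast test image is synthetic.

```python
import numpy as np
from skimage import exposure

# Pick the CLAHE clip limit that maximises the entropy of the processed image;
# a coarse grid search stands in for the constrained optimisation of the paper.
rng = np.random.default_rng(14)
yy, xx = np.mgrid[0:256, 0:256]
image = (0.30 + 0.05 * np.sin(xx / 12.0) + 0.03 * np.sin(yy / 30.0)
         + 0.02 * rng.standard_normal((256, 256)))
image = np.clip(image, 0.0, 1.0)                 # low-contrast synthetic setup image

def entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

best_clip, best_h = None, -np.inf
for clip in [0.005, 0.01, 0.02, 0.05, 0.1]:
    enhanced = exposure.equalize_adapthist(image, clip_limit=clip)
    h = entropy(enhanced)
    if h > best_h:
        best_clip, best_h = clip, h

print("original entropy :", round(entropy(image), 2))
print("best clip limit  :", best_clip, "entropy:", round(best_h, 2))
```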

  1. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image review software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters; they are therefore inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of daily patient setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip-limit parameters. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.

  2. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Energy Technology Data Exchange (ETDEWEB)

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques such as Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. For each selected band, a signal is reconstructed in which a defect is considered present only where all frequency components share a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture was applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances the SNR by 20 dB for both samples; consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, whereas SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
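
    A minimal sketch of the frequency-diversity idea follows: the A-scan is band-pass filtered into several ascending sub-bands, samples where all band-limited reconstructions agree in sign are marked, and the indicators are combined into a defect-likelihood profile. The band edges, filter order and synthetic A-scan are assumptions, not the parameters used in the paper.

```python
# Multi-band sign-coherence sketch for clutter reduction in an ultrasonic A-scan.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100e6                                 # sampling rate (assumed)
t = np.arange(0.0, 20e-6, 1.0 / fs)
rng = np.random.default_rng(1)

# Toy A-scan: a 5 MHz defect echo at 10 us buried in grain-like noise.
echo = np.exp(-((t - 10e-6) ** 2) / (2 * (0.3e-6) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
ascan = echo + 0.4 * rng.normal(size=t.size)

# Ascending series of overlapping sub-bands around the transducer centre frequency.
bands = [(3e6, 5.5e6), (3.5e6, 6e6), (4e6, 6.5e6), (4.5e6, 7e6)]
recons = []
for lo, hi in bands:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    recons.append(filtfilt(b, a, ascan))
recons = np.array(recons)

# A sample is "coherent" when every band-limited reconstruction has the same sign there.
signs = np.sign(recons)
coherent = np.all(signs == signs[0], axis=0).astype(float)
profile = coherent * np.mean(np.abs(recons), axis=0)    # weight by mean sub-band amplitude

print("most likely defect position: %.2f us" % (t[np.argmax(profile)] * 1e6))
```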

  3. Robust estimation of adaptive tensors of curvature by tensor voting.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.

  4. Automatic physical inference with information maximizing neural networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.

  5. Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting.

    Science.gov (United States)

    Wöllmer, Martin; Marchi, Erik; Squartini, Stefano; Schuller, Björn

    2011-09-01

    Highly spontaneous, conversational, and potentially emotional and noisy speech is known to be a challenge for today's automatic speech recognition (ASR) systems, which highlights the need for advanced algorithms that improve speech features and models. Histogram Equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components. In this article, we propose to combine histogram equalization and multi-condition training for robust keyword detection in noisy speech. To better cope with conversational speaking styles, we show how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory neural network. The proposed techniques are evaluated on the SEMAINE database-a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".
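
    As a generic illustration of the histogram equalization step mentioned above, the sketch below maps each feature dimension through the empirical CDF of the (noisy) data and then through the inverse CDF of a standard normal reference. The reference distribution, dimensionality and synthetic features are assumptions; the paper's exact configuration may differ.

```python
# Per-dimension histogram equalization (HEQ) of acoustic features, mapping the
# empirical distribution of noisy features onto a standard normal reference.
import numpy as np
from scipy.stats import norm

def histogram_equalize(train_feats, test_feats):
    """Equalize each feature dimension of test_feats using ranks within train_feats."""
    eq = np.empty_like(test_feats, dtype=float)
    n = train_feats.shape[0]
    for d in range(train_feats.shape[1]):
        ref = np.sort(train_feats[:, d])
        # empirical CDF value of each test sample within the training distribution
        ranks = np.searchsorted(ref, test_feats[:, d], side="right")
        cdf = (ranks + 0.5) / (n + 1.0)
        eq[:, d] = norm.ppf(cdf)          # map to standard normal quantiles
    return eq

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 13))                   # e.g. 13 MFCC-like dims
noisy = 0.7 * clean + rng.normal(1.5, 0.5, size=clean.shape)    # shifted, compressed

equalized = histogram_equalize(noisy, noisy)
print("noisy mean/std:    ", noisy.mean().round(2), noisy.std().round(2))
print("equalized mean/std:", equalized.mean().round(2), equalized.std().round(2))
```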

  6. Robust shot-noise measurement for continuous-variable quantum key distribution

    Science.gov (United States)

    Kunz-Jacques, Sébastien; Jouguet, Paul

    2015-02-01

    We study a practical method to measure the shot noise in real time in continuous-variable quantum key distribution systems. The amount of secret key that can be extracted from the raw statistics depends strongly on this quantity since it affects in particular the computation of the excess noise (i.e., noise in excess of the shot noise) added by an eavesdropper on the quantum channel. Some powerful quantum hacking attacks relying on faking the estimated value of the shot noise to hide an intercept and resend strategy were proposed. Here, we provide experimental evidence that our method can defeat the saturation attack and the wavelength attack.

  7. LTD windows of the STDP learning rule and synaptic connections having a large transmission delay enable robust sequence learning amid background noise.

    Science.gov (United States)

    Hayashi, Hatsuo; Igarashi, Jun

    2009-06-01

    Spike-timing-dependent synaptic plasticity (STDP) is a simple and effective learning rule for sequence learning. However, synapses being subject to STDP rules are readily influenced in noisy circumstances because synaptic conductances are modified by pre- and postsynaptic spikes elicited within a few tens of milliseconds, regardless of whether those spikes convey information or not. Noisy firing existing everywhere in the brain may induce irrelevant enhancement of synaptic connections through STDP rules and would result in uncertain memory encoding and obscure memory patterns. We will here show that the LTD windows of the STDP rules enable robust sequence learning amid background noise in cooperation with a large signal transmission delay between neurons and a theta rhythm, using a network model of the entorhinal cortex layer II with entorhinal-hippocampal loop connections. The important element of the present model for robust sequence learning amid background noise is the symmetric STDP rule having LTD windows on both sides of the LTP window, in addition to the loop connections having a large signal transmission delay and the theta rhythm pacing activities of stellate cells. Above all, the LTD window in the range of positive spike-timing is important to prevent influences of noise with the progress of sequence learning.

  8. An Overview of the Adaptive Robust DFT

    Directory of Open Access Journals (Sweden)

    Djurović Igor

    2010-01-01

    Full Text Available Abstract This paper overviews basic principles and applications of the robust DFT (RDFT approach, which is used for robust processing of frequency-modulated (FM signals embedded in non-Gaussian heavy-tailed noise. In particular, we concentrate on the spectral analysis and filtering of signals corrupted by impulsive distortions using adaptive and nonadaptive robust estimators. Several adaptive estimators of location parameter are considered, and it is shown that their application is preferable with respect to non-adaptive counterparts. This fact is demonstrated by efficiency comparison of adaptive and nonadaptive RDFT methods for different noise environments.
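
    A minimal sketch of the non-adaptive robust DFT idea follows: for each frequency bin the signal is demodulated and the sample mean used by the standard DFT is replaced by a robust location estimate (here the marginal median of the real and imaginary parts), which resists impulsive, heavy-tailed noise. An adaptive variant would select or tune the location estimator from the data.

```python
# Robust DFT sketch: replace the per-bin mean of the standard DFT with the
# marginal median of the demodulated samples, which resists impulsive noise.
import numpy as np

def robust_dft(x):
    n = len(x)
    idx = np.arange(n)
    X = np.empty(n, dtype=complex)
    for k in range(n):
        demod = x * np.exp(-2j * np.pi * k * idx / n)
        # the standard DFT bin would be n * demod.mean(); use medians instead
        X[k] = n * (np.median(demod.real) + 1j * np.median(demod.imag))
    return X

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 32 * t / n)                     # test tone at bin 32
impulse_pos = rng.choice(n, size=10, replace=False)
x[impulse_pos] += rng.normal(0, 100, size=10)          # heavy-tailed impulsive noise

mags_fft = np.abs(np.fft.fft(x))[1 : n // 2]
mags_rdft = np.abs(robust_dft(x))[1 : n // 2]
print("FFT peak bin:       ", 1 + int(np.argmax(mags_fft)))
print("robust DFT peak bin:", 1 + int(np.argmax(mags_rdft)))
```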

  9. Robust Multiparty Quantum Secret Key Sharing Over Two Collective-Noise Channels via Three-Photon Mixed States

    International Nuclear Information System (INIS)

    Wang Zhangyin; Yuan Hao; Gao Gan; Shi Shouhua

    2006-01-01

    We present a robust (n,n)-threshold scheme for multiparty quantum secret sharing of key over two collective-noise channels (i.e., the collective dephasing channel and the collective rotating channel) via three-photon mixed states. In our scheme, only if all the sharers collaborate together can they establish a joint key with the message sender and extract the secret message from the sender's encrypted message. This scheme can be implemented using only a Bell singlet, a one-qubit state and polarization identification of single photon, so it is completely feasible according to the present-day technique.

  10. Feasibility of online IMPT adaptation using fast, automatic and robust dose restoration

    Science.gov (United States)

    Bernatowicz, Kinga; Geets, Xavier; Barragan, Ana; Janssens, Guillaume; Souris, Kevin; Sterpin, Edmond

    2018-04-01

    Intensity-modulated proton therapy (IMPT) offers excellent dose conformity and healthy tissue sparing, but it can be substantially compromised in the presence of anatomical changes. A major dosimetric effect is caused by density changes, which alter the planned proton range in the patient. Three different methods, which automatically restore an IMPT plan dose on a daily CT image were implemented and compared: (1) simple dose restoration (DR) using optimization objectives of the initial plan, (2) voxel-wise dose restoration (vDR), and (3) isodose volume dose restoration (iDR). Dose restorations were calculated for three different clinical cases, selected to test different capabilities of the restoration methods: large range adaptation, complex dose distributions and robust re-optimization. All dose restorations were obtained in less than 5 min, without manual adjustments of the optimization settings. The evaluation of initial plans on repeated CTs showed large dose distortions, which were substantially reduced after restoration. In general, all dose restoration methods improved DVH-based scores in propagated target volumes and OARs. Analysis of local dose differences showed that, although all dose restorations performed similarly in high dose regions, iDR restored the initial dose with higher precision and accuracy in the whole patient anatomy. Median dose errors decreased from 13.55 Gy in distorted plan to 9.75 Gy (vDR), 6.2 Gy (DR) and 4.3 Gy (iDR). High quality dose restoration is essential to minimize or eventually by-pass the physician approval of the restored plan, as long as dose stability can be assumed. Motion (as well as setup and range uncertainties) can be taken into account by including robust optimization in the dose restoration. Restoring clinically-approved dose distribution on repeated CTs does not require new ROI segmentation and is compatible with an online adaptive workflow.

  11. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different...... approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating....... Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable...

  12. Training shortest-path tractography: Automatic learning of spatial priors

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde

    2016-01-01

    Tractography is the standard tool for automatic delineation of white matter tracts from diffusion weighted images. However, the output of tractography often requires post-processing to remove false positives and ensure a robust delineation of the studied tract, and this demands expert prior...... knowledge. Here we demonstrate how such prior knowledge, or indeed any prior spatial information, can be automatically incorporated into a shortest-path tractography approach to produce more robust results. We describe how such a prior can be automatically generated (learned) from a population, and we...

  13. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work, two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k$ nonzero entries can be recovered from its sketch $Ax$ in a number of operations proportional to $\mathrm{nnz}(A)$ by the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

  14. Detection of heart beats in multimodal data: a robust beat-to-beat interval estimation approach.

    Science.gov (United States)

    Antink, Christoph Hoog; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    The heart rate and its variability play a vital role in the continuous monitoring of patients, especially in the critical care unit. They are commonly derived automatically from the electrocardiogram as the interval between consecutive heart beats. While their identification by QRS-complexes is straightforward under ideal conditions, the exact localization can be a challenging task if the signal is severely contaminated with noise and artifacts. At the same time, other signals directly related to cardiac activity are often available. In this multi-sensor scenario, methods of multimodal sensor-fusion allow the exploitation of redundancies to increase the accuracy and robustness of beat detection. In this paper, an algorithm for the robust detection of heart beats in multimodal data is presented. Classic peak-detection is augmented by robust multi-channel, multimodal interval estimation to eliminate false detections and insert missing beats. This approach yielded a score of 90.70 and was thus ranked third in the PhysioNet/Computing in Cardiology Challenge 2014: Robust Detection of Heart Beats in Multimodal Data follow-up analysis. In the future, the robust beat-to-beat interval estimator may directly be used for the automated processing of multimodal patient data for applications such as diagnosis support and intelligent alarming.
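
    In the spirit of the interval-based correction described above, the sketch below detects candidate beats with a peak detector and then uses a robust (median) beat-to-beat interval estimate to reject detections that create implausibly short intervals and to insert beats into gaps close to a multiple of that interval. The synthetic signal, thresholds and tolerances are illustrative assumptions; the actual challenge entry fuses several modalities.

```python
# Peak detection refined by a robust (median) beat-to-beat interval estimate.
import numpy as np
from scipy.signal import find_peaks

fs = 250.0
rng = np.random.default_rng(0)
beat_times = np.arange(0.8, 30, 0.8)                         # true beats every 0.8 s
t = np.arange(0, 30, 1 / fs)
sig = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beat_times)
sig += 0.08 * rng.normal(size=t.size)                        # measurement noise
sig[int(12.35 * fs)] += 1.5                                  # artifact (false beat)
sig[(t > 20.5) & (t < 21.1)] *= 0.1                          # attenuated (missed) beat

peaks, _ = find_peaks(sig, height=0.5, distance=int(0.3 * fs))
times = peaks / fs
ibi = np.median(np.diff(times))                              # robust interval estimate

# Reject detections that create implausibly short intervals...
kept = [times[0]]
for p in times[1:]:
    if p - kept[-1] > 0.6 * ibi:
        kept.append(p)

# ...and insert beats into gaps that span roughly a multiple of the interval.
corrected = []
for a, b in zip(kept[:-1], kept[1:]):
    corrected.append(a)
    n_missing = int(round((b - a) / ibi)) - 1
    for m in range(1, n_missing + 1):
        corrected.append(a + m * (b - a) / (n_missing + 1))
corrected.append(kept[-1])

print(f"median interval = {ibi:.3f} s; detections: raw={len(times)}, corrected={len(corrected)}")
```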

  15. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To best address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance, robust to gray-scale variations, rotation changes and noise, yet at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
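
    A toy sketch of the central idea is given below: an LBP-style code is built by comparing median-filtered neighbour samples against a local median instead of raw centre-pixel intensities, which makes the code far less sensitive to salt-and-pepper corruption. The radius, patch size, single scale and test image are assumptions; the full MRELBP descriptor is multiscale and uses a more elaborate sampling scheme.

```python
# Toy median-robust LBP: compare regional medians instead of raw pixel intensities.
import numpy as np
from scipy.ndimage import median_filter

def median_lbp(image, radius=2, patch=3):
    med = median_filter(image.astype(float), size=patch)      # regional medians
    h, w = med.shape
    centre = med[radius : h - radius, radius : w - radius]
    code = np.zeros_like(centre, dtype=np.uint8)
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    for bit, (dy, dx) in enumerate(offsets):                  # 8 neighbours on a ring
        neigh = med[radius + dy : h - radius + dy, radius + dx : w - radius + dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    return code

rng = np.random.default_rng(0)
texture = rng.normal(size=(64, 64)).cumsum(axis=1)            # toy textured image
noisy = texture.copy()
noisy[rng.random(texture.shape) < 0.05] = texture.max() * 2   # 5% salt noise

clean_hist = np.bincount(median_lbp(texture).ravel(), minlength=256)
noisy_hist = np.bincount(median_lbp(noisy).ravel(), minlength=256)
overlap = np.minimum(clean_hist, noisy_hist).sum() / clean_hist.sum()
print(f"histogram overlap between clean and noisy codes: {overlap:.2f}")
```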

  16. Dynamic Optimization of Feedforward Automatic Gauge Control Based on Extended Kalman Filter

    Institute of Scientific and Technical Information of China (English)

    YANG Bin-hu; YANG Wei-dong; CHEN Lian-gui; QU Lei

    2008-01-01

    Automatic gauge control is an essentially nonlinear process with time delay, and stochastically varying input and process noise always affect the target gauge control accuracy. To improve the control capability of feedforward automatic gauge control, a Kalman filter was employed to filter the noise signal transferred from one stand to another. The linearized matrix required by the Kalman filter algorithm was derived; thus, the feedforward automatic gauge control architecture was dynamically optimized. Theoretical analyses and simulations show that the proposed algorithm is reasonable and effective.
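
    A minimal scalar sketch of the filtering step described above: the gauge deviation transferred between stands is modelled as a slowly varying state observed through noisy measurements, and a one-dimensional Kalman filter supplies the smoothed value that would feed the feedforward control. The state model, noise variances and signal are illustrative assumptions.

```python
# Scalar Kalman filter for a noisy gauge-deviation signal (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
n = 200
true_dev = np.concatenate([np.zeros(50), 0.05 * np.ones(100), np.zeros(50)])  # mm
meas = true_dev + rng.normal(0, 0.02, n)          # measurement noise (mm)

A, H = 1.0, 1.0          # random-walk state model, direct observation
Q, R = 1e-5, 0.02 ** 2   # process / measurement noise variances (assumed)

x_hat, P = 0.0, 1.0
filtered = np.empty(n)
for k in range(n):
    # predict
    x_pred = A * x_hat
    P_pred = A * P * A + Q
    # update with the new gauge measurement
    K = P_pred * H / (H * P_pred * H + R)
    x_hat = x_pred + K * (meas[k] - H * x_pred)
    P = (1 - K * H) * P_pred
    filtered[k] = x_hat

rmse_raw = np.sqrt(np.mean((meas - true_dev) ** 2))
rmse_kf = np.sqrt(np.mean((filtered - true_dev) ** 2))
print(f"RMSE raw = {rmse_raw:.4f} mm, Kalman-filtered = {rmse_kf:.4f} mm")
```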

  17. Non-Stationary Rician Noise Estimation in Parallel MRI Using a Single Image: A Variance-Stabilizing Approach.

    Science.gov (United States)

    Pieciak, Tomasz; Aja-Fernandez, Santiago; Vegas-Sanchez-Ferrero, Gonzalo

    2017-10-01

    Parallel magnetic resonance imaging (pMRI) techniques have gained a great importance both in research and clinical communities recently since they considerably accelerate the image acquisition process. However, the image reconstruction algorithms needed to correct the subsampling artifacts affect the nature of noise, i.e., it becomes non-stationary. Some methods have been proposed in the literature dealing with the non-stationary noise in pMRI. However, their performance depends on information not usually available such as multiple acquisitions, receiver noise matrices, sensitivity coil profiles, reconstruction coefficients, or even biophysical models of the data. Besides, some methods show an undesirable granular pattern on the estimates as a side effect of local estimation. Finally, some methods make strong assumptions that just hold in the case of high signal-to-noise ratio (SNR), which limits their usability in real scenarios. We propose a new automatic noise estimation technique for non-stationary Rician noise that overcomes the aforementioned drawbacks. Its effectiveness is due to the derivation of a variance-stabilizing transformation designed to deal with any SNR. The method was compared to the main state-of-the-art methods in synthetic and real scenarios. Numerical results confirm the robustness of the method and its better performance for the whole range of SNRs.

  18. A low noise clock generator for high-resolution time-to-digital convertors

    International Nuclear Information System (INIS)

    Prinzie, J.; Leroux, P.; Christiaensen, J.; Moreira, P.; Steyaert, M.

    2016-01-01

    A robust PLL clock generator has been designed for the harsh environment of high-energy physics applications. The PLL operates with a reference clock frequency of 40 MHz to 50 MHz and performs a multiplication by 64. An LC-tank VCO with low internal phase noise can generate frequencies from 2.2 GHz up to 3.2 GHz with internal discrete bank switching. The PLL includes an automatic bank selection algorithm to select the correct range of the oscillator. The PLL has been fabricated in a 65 nm CMOS technology and consumes less than 30 mW. The additive jitter of the PLL has been measured to be less than 400 fs RMS.

  19. FliPer: checking the reliability of global seismic parameters from automatic pipelines

    Science.gov (United States)

    Bugnet, L.; García, R. A.; Davies, G. R.; Mathur, S.; Corsaro, E.

    2017-12-01

    Our understanding of stars through asteroseismic data analysis is limited by our ability to take advantage of the huge number of observed stars provided by space missions such as CoRoT, Kepler, K2, and soon TESS and PLATO. Global seismic pipelines provide global stellar parameters such as mass and radius using the mean seismic parameters, as well as the effective temperature. These pipelines are commonly used automatically on thousands of stars observed by K2 for 3 months (and soon TESS for at least ~ 1 month). However, pipelines are not immune from misidentifying noise peaks and stellar oscillations. Therefore, new validation techniques are required to assess the quality of these results. We present a new metric called FliPer (Flicker in Power), which takes into account the average variability at all measured time scales. The proper calibration of FliPer enables us to obtain good estimations of global stellar parameters such as surface gravity that are robust against the influence of noise peaks and hence are an excellent way to find faults in asteroseismic pipelines.

  20. Robust indexing for automatic data collection

    International Nuclear Information System (INIS)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-01-01

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation

  1. Making tensor factorizations robust to non-gaussian noise.

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Eric C. (Rice University, Houston, TX); Kolda, Tamara Gibson

    2011-03-01

    Tensors are multi-way arrays, and the CANDECOMP/PARAFAC (CP) tensor factorization has found application in many different domains. The CP model is typically fit using a least squares objective function, which is a maximum likelihood estimate under the assumption of independent and identically distributed (i.i.d.) Gaussian noise. We demonstrate that this loss function can be highly sensitive to non-Gaussian noise. Therefore, we propose a loss function based on the 1-norm because it can accommodate both Gaussian and grossly non-Gaussian perturbations. We also present an alternating majorization-minimization (MM) algorithm for fitting a CP model using our proposed loss function (CPAL1) and compare its performance to the workhorse algorithm for fitting CP models, CP alternating least squares (CPALS).

  2. Robustness of Linear Systems towards Multi-Dissipative Pertubations

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Poulsen, Niels Kjølstad

    1997-01-01

    We consider the question of robust stability of a linear time invariant plant subject to dynamic perturbations, which are dissipative in the sense of Willems with respect to several quadratic supply rates. For instance, parasitic dynamics are often both small gain and passive. We reduce several...... robustness analysis questions to linear matrix inequalities: robust stability, robust H2 performance and robust performance in presence of disturbances with finite signal-to-noise ratios...

  3. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm in which the reduction factors are allowed to vary with time is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
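
    The following numerical sketch mirrors the qualitative conclusion: with a fixed gain, the Widrow-Hoff (LMS) weights keep fluctuating under bounded measurement noise, whereas a time-decaying reduction factor typically lets them settle much closer to the true weights. The gain schedule, noise bound and data are illustrative assumptions, not those analysed in the paper.

```python
# Widrow-Hoff (LMS) adaptation: fixed gain vs. a time-decaying reduction factor,
# with bounded measurement noise on the desired response.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(5000, 3))
d = X @ w_true + rng.uniform(-0.2, 0.2, size=5000)     # bounded noise

def widrow_hoff(X, d, gain):
    """gain is either a constant or a function of the iteration index."""
    w = np.zeros(X.shape[1])
    for k, (x, dk) in enumerate(zip(X, d)):
        mu = gain(k) if callable(gain) else gain
        w += mu * (dk - w @ x) * x                     # Widrow-Hoff update
    return w

w_fixed = widrow_hoff(X, d, 0.05)
w_decay = widrow_hoff(X, d, lambda k: 0.2 / (1.0 + 0.01 * k))

print("weight error, fixed gain:   ", float(np.linalg.norm(w_fixed - w_true)))
print("weight error, decaying gain:", float(np.linalg.norm(w_decay - w_true)))
```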

  4. Towards social touch intelligence: developing a robust system for automatic touch recognition

    NARCIS (Netherlands)

    Jung, Merel Madeleine

    2014-01-01

    Touch behavior is of great importance during social interaction. Automatic recognition of social touch is necessary to transfer the touch modality from interpersonal interaction to other areas such as Human-Robot Interaction (HRI). This paper describes a PhD research program on the automatic

  5. Robust spike sorting of retinal ganglion cells tuned to spot stimuli.

    Science.gov (United States)

    Ghahari, Alireza; Badea, Tudor C

    2016-08-01

    We propose an automatic spike sorting approach for the data recorded from a microelectrode array during visual stimulation of wild type retinas with tiled spot stimuli. The approach first detects individual spikes per electrode by their signature local minima. With the mixture probability distribution of the local minima estimated afterwards, it applies a minimum-squared-error clustering algorithm to sort the spikes into different clusters. A template waveform for each cluster per electrode is defined, and a number of reliability tests are performed on it and its corresponding spikes. Finally, a divisive hierarchical clustering algorithm is used to deal with the correlated templates per cluster type across all the electrodes. According to the measures of performance of the spike sorting approach, it is robust even in the cases of recordings with low signal-to-noise ratio.

  6. Automatic detection of ECG electrode misplacement: a tale of two algorithms

    International Nuclear Information System (INIS)

    Xia, Henian; Garcia, Gabriel A; Zhao, Xiaopeng

    2012-01-01

    Artifacts in an electrocardiogram (ECG) due to electrode misplacement can lead to wrong diagnoses. Various computer methods have been developed for automatic detection of electrode misplacement. Here we reviewed and compared the performance of two algorithms with the highest accuracies on several databases from PhysioNet. These algorithms were implemented into four models. For clean ECG records with clearly distinguishable waves, the best model produced excellent accuracies (≥98.4%) for all misplacements except the LA/LL interchange (87.4%). However, the accuracies were significantly lower for records with noise and arrhythmias. Moreover, when the algorithms were tested on a database that was independent of the training database, the accuracies could be poor. For the worst scenario, the best accuracies for different types of misplacements ranged from 36.1% to 78.4%. A large number of ECGs of various qualities and pathological conditions are collected every day. To improve the quality of health care, the results of this paper call for more robust and accurate algorithms for automatic detection of electrode misplacement, which should be developed and tested using a database of extensive ECG records. (paper)

  7. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition.

    Science.gov (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng

    2017-10-02

    Fringe-based optical measurement techniques require reliable fringe analysis methods, where empirical mode decomposition (EMD) is an outstanding one due to its ability of analyzing complex signals and the merit of being data-driven. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), making the decomposed intrinsic mode functions (IMFs) have equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise exist in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; properties of the resulted BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated and the latter provides a feedback to the former, helping to automatically stop the decomposition to make the algorithm simpler and more reliable. A series of experiments show that the proposed method is accurate, efficient and robust to various fringe patterns even with poor quality, rendering it a potential tool for practical use.

  8. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    2008-03-01

    Full Text Available Functional MRI resting state and connectivity studies of brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from lung and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open source, physiological signal processing program, called PhysioNoise, in the python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.

  9. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by the sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor...... a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower...

  10. Highly noise resistant multiqubit quantum correlations

    Science.gov (United States)

    Laskowski, Wiesław; Vértesi, Tamás; Wieśniak, Marcin

    2015-11-01

    We analyze robustness of correlations of the N-qubit GHZ and Dicke states against white noise admixture. For sufficiently large N, the Dicke states (for any number of excitations) lead to more robust violation of local realism than the GHZ states (e.g. for N > 8 for the W state). We also identify states that are the most resistant to white noise. Surprisingly, it turns out that these states are the GHZ states augmented with fully product states. Based on our numerical analysis conducted up to N = 8, and an analytical formula derived for any N parties, we conjecture that the three-qubit GHZ state augmented with a product of (N - 3) pure qubits is the most robust against white noise admixture among any N-qubit state. As a by-product, we derive a single Bell inequality and show that it is violated by all pure entangled states of a given number of parties. This gives an alternative proof of Gisin’s theorem.

  11. Highly noise resistant multiqubit quantum correlations

    International Nuclear Information System (INIS)

    Laskowski, Wiesław; Wieśniak, Marcin; Vértesi, Tamás

    2015-01-01

    We analyze robustness of correlations of the N-qubit GHZ and Dicke states against white noise admixture. For sufficiently large N, the Dicke states (for any number of excitations) lead to more robust violation of local realism than the GHZ states (e.g. for N > 8 for the W state). We also identify states that are the most resistant to white noise. Surprisingly, it turns out that these states are the GHZ states augmented with fully product states. Based on our numerical analysis conducted up to N = 8, and an analytical formula derived for any N parties, we conjecture that the three-qubit GHZ state augmented with a product of (N − 3) pure qubits is the most robust against white noise admixture among any N-qubit state. As a by-product, we derive a single Bell inequality and show that it is violated by all pure entangled states of a given number of parties. This gives an alternative proof of Gisin’s theorem. (paper)

  12. Speech production in amplitude-modulated noise

    DEFF Research Database (Denmark)

    Macdonald, Ewen N; Raufer, Stefan

    2013-01-01

    The Lombard effect refers to the phenomenon where talkers automatically increase their level of speech in a noisy environment. While many studies have characterized how the Lombard effect influences different measures of speech production (e.g., F0, spectral tilt, etc.), few have investigated...... the consequences of temporally fluctuating noise. In the present study, 20 talkers produced speech in a variety of noise conditions, including both steady-state and amplitude-modulated white noise. While listening to noise over headphones, talkers produced randomly generated five word sentences. Similar...... of noisy environments and will alter their speech accordingly....

  13. A Fast and Robust Method for Measuring Optical Channel Gain

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour; Stoustrup, Jakob; Villemoes, L.F.

    2000-01-01

    We present a numerically stable and computationally simple method for fast and robust measurement of optical channel gain. By transmitting adaptively designed signals through the channel, good accuracy is possible even in severe noise conditions.

  14. The performance of an automatic acoustic-based program classifier compared to hearing aid users' manual selection of listening programs.

    Science.gov (United States)

    Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias

    2018-03-01

    To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single blind repeated measures study. Participants were fit with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial preferences were reassessed and the users preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12 loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participant preferences of manual programme for scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participant's manual selection for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.

  15. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331

  16. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing influence from two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.

  17. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing influence from two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in an uncontrolled environment at low cost.
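
    The sketch below reproduces the fusion and segmentation steps in simplified form with PyWavelets and scikit-image: the a*-component and I-component feature images are fused at the wavelet-coefficient level (approximations averaged, details chosen by larger magnitude) and the fused image is segmented with an adaptive Otsu threshold. The fusion rule, colour conversions and the synthetic scene are assumptions, so details may differ from the published pipeline.

```python
# Wavelet-level fusion of two colour-feature images (a* from L*a*b*, I from YIQ),
# followed by Otsu thresholding, as a simplified version of the fusion pipeline.
import numpy as np
import pywt
from skimage.color import rgb2lab, rgb2yiq
from skimage.filters import threshold_otsu

# Synthetic scene: a reddish disc ("tomato") on a greenish background.
h, w = 128, 128
yy, xx = np.mgrid[:h, :w]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
rgb = np.zeros((h, w, 3))
rgb[..., 1] = 0.5                                    # green background
rgb[mask] = [0.8, 0.2, 0.1]                          # red fruit
rgb += 0.05 * np.random.default_rng(0).normal(size=rgb.shape)
rgb = np.clip(rgb, 0, 1)

a_star = rgb2lab(rgb)[..., 1]                        # a* component (green-red axis)
i_comp = rgb2yiq(rgb)[..., 1]                        # I component of YIQ

def normalize(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

# Pixel-level fusion in the wavelet domain: average the approximations, keep the
# detail coefficient with the larger magnitude.
cA1, (cH1, cV1, cD1) = pywt.dwt2(normalize(a_star), "db2")
cA2, (cH2, cV2, cD2) = pywt.dwt2(normalize(i_comp), "db2")
fuse = lambda d1, d2: np.where(np.abs(d1) >= np.abs(d2), d1, d2)
fused = pywt.idwt2(((cA1 + cA2) / 2, (fuse(cH1, cH2), fuse(cV1, cV2), fuse(cD1, cD2))), "db2")

segmented = fused > threshold_otsu(fused)            # adaptive threshold
print("segmented fruit pixels:", int(segmented.sum()), "of", int(mask.sum()), "true pixels")
```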

  18. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering

    Directory of Open Access Journals (Sweden)

    Oliynyk Andriy

    2012-08-01

    Full Text Available Abstract Background Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Results Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is

  19. Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering.

    Science.gov (United States)

    Oliynyk, Andriy; Bonifazzi, Claudio; Montani, Fernando; Fadiga, Luciano

    2012-08-08

    Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of resulting clusters independent of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike
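
    As a schematic of the pre-processing and clustering stages described, the sketch below projects synthetic spike waveforms onto their leading singular vectors and clusters the projections, using k-means from scikit-learn as a simple stand-in for the unsupervised fuzzy C-means step. Waveform shapes, counts and the number of clusters are assumptions.

```python
# Sketch of SVD-based feature extraction followed by clustering of spike waveforms.
# K-means stands in for the fuzzy C-means step of the described pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, n_per_unit = 48, 300
t = np.linspace(0, 1, n_samples)

# Two synthetic spike templates with different widths and amplitudes.
templates = [-np.exp(-((t - 0.4) ** 2) / 0.002) + 0.4 * np.exp(-((t - 0.6) ** 2) / 0.01),
             -0.6 * np.exp(-((t - 0.35) ** 2) / 0.004) + 0.3 * np.exp(-((t - 0.7) ** 2) / 0.02)]
waveforms = np.vstack([tpl + 0.08 * rng.normal(size=(n_per_unit, n_samples)) for tpl in templates])
labels_true = np.repeat([0, 1], n_per_unit)

# SVD of the centred waveform matrix; keep the first few right singular vectors.
centred = waveforms - waveforms.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
features = centred @ Vt[:3].T                 # project onto the top-3 components

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
agreement = max(np.mean(clusters == labels_true), np.mean(clusters != labels_true))
print(f"cluster/label agreement: {agreement:.2%}")
```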

  20. Hybrid model decomposition of speech and noise in a radial basis function neural model framework

    DEFF Research Database (Denmark)

    Sørensen, Helge Bjarup Dissing; Hartmann, Uwe

    1994-01-01

    The aim of the paper is to focus on a new approach to automatic speech recognition in noisy environments where the noise has either stationary or non-stationary statistical characteristics. The aim is to perform automatic recognition of speech in the presence of additive car noise. The technique...

  1. A Robust Approach For Acoustic Noise Suppression In Speech Using ANFIS

    Science.gov (United States)

    Martinek, Radek; Kelnar, Michal; Vanus, Jan; Bilik, Petr; Zidek, Jan

    2015-11-01

    The authors of this article deal with the implementation of a combination of fuzzy-system and artificial-intelligence techniques in the application area of non-linear noise and interference suppression. The structure used is called an Adaptive Neuro Fuzzy Inference System (ANFIS). This system finds practical use mainly in audio telephone (mobile) communication in noisy environments (transport, production halls, sports matches, etc.). Experimental methods based on the two-input adaptive noise cancellation concept were clearly outlined. Within the experiments carried out, the authors created, based on the ANFIS structure, a comprehensive system for adaptive suppression of unwanted background interference that occurs in audio communication and degrades the audio signal. The system designed has been tested on real voice signals. This article presents an investigation and comparison of three distinct approaches to noise cancellation in speech: LMS (least mean squares) and RLS (recursive least squares) adaptive filtering, and ANFIS. A careful review of the literature indicated the importance of non-linear adaptive algorithms over linear ones in noise cancellation. It was concluded that the ANFIS approach had the overall best performance, as it efficiently cancelled noise even in highly noise-degraded speech. Results were drawn from the successful experimentation; subjective tests were used to analyse comparative performance, while objective tests were used to validate the results. The algorithms were implemented in Matlab to justify the claims and determine their relative performances.
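
    For reference, the sketch below implements the classical two-input adaptive noise cancellation baseline with an LMS filter: a reference input that picks up only the noise is filtered adaptively and subtracted from the primary (speech plus noise) input, and the error signal is the cleaned speech estimate. ANFIS replaces this linear adaptive filter with a neuro-fuzzy one; all signals and parameters here are synthetic assumptions.

```python
# Two-input adaptive noise cancellation with an LMS filter (the linear baseline
# discussed above; ANFIS would replace the linear adaptive filter).
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 8000, 3.0
t = np.arange(0, dur, 1 / fs)

speech = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 1.5 * t) > 0)   # toy "speech"
noise_src = rng.normal(size=t.size)                                        # ambient noise
# primary mic: speech + noise coloured by an (unknown) acoustic path
path = np.array([0.6, 0.3, -0.2, 0.1])
primary = speech + np.convolve(noise_src, path, mode="full")[: t.size]
reference = noise_src                                                      # noise-only mic

L, mu = 8, 0.01
wgt = np.zeros(L)
out = np.zeros_like(primary)
for n in range(L, t.size):
    x = reference[n - L + 1 : n + 1][::-1]       # most recent L reference samples
    y = wgt @ x                                  # estimate of the noise at the primary mic
    e = primary[n] - y                           # error = cleaned signal estimate
    wgt += mu * e * x                            # LMS update
    out[n] = e

snr_before = 10 * np.log10(np.mean(speech ** 2) / np.mean((primary - speech) ** 2))
snr_after = 10 * np.log10(np.mean(speech[L:] ** 2) / np.mean((out[L:] - speech[L:]) ** 2))
print(f"SNR before: {snr_before:.1f} dB, after LMS cancellation: {snr_after:.1f} dB")
```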

  2. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal in noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may

  3. Classification of mislabelled microarrays using robust sparse logistic regression.

    Science.gov (United States)

    Bootkrajang, Jakramate; Kabán, Ata

    2013-04-01

    Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, it is effective at identifying marker genes and simultaneously it detects mislabelled arrays to high accuracy. The code is available from http://cs.bham.ac.uk/∼jxb008. Supplementary data are available at Bioinformatics online.

  4. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.

  5. Generic and robust method for automatic segmentation of PET images using an active contour model

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Mingzan [Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen (Netherlands)

    2016-08-15

    Purpose: Although positron emission tomography (PET) images have shown potential to improve the accuracy of targeting in radiation therapy planning and assessment of response to treatment, the boundaries of tumors are not easily distinguishable from surrounding normal tissue owing to the low spatial resolution and inherent noisy characteristics of PET images. The objective of this study is to develop a generic and robust method for automatic delineation of tumor volumes using an active contour model and to evaluate its performance using phantom and clinical studies. Methods: MASAC, a method for automatic segmentation using an active contour model, incorporates the histogram fuzzy C-means clustering, and localized and textural information to constrain the active contour to detect boundaries in an accurate and robust manner. Moreover, the lattice Boltzmann method is used as an alternative approach for solving the level set equation to make it faster and suitable for parallel programming. Twenty simulated phantom studies and 16 clinical studies, including six cases of pharyngolaryngeal squamous cell carcinoma and ten cases of nonsmall cell lung cancer, were included to evaluate its performance. Besides, the proposed method was also compared with the contourlet-based active contour algorithm (CAC) and Schaefer’s thresholding method (ST). The relative volume error (RE), Dice similarity coefficient (DSC), and classification error (CE) metrics were used to analyze the results quantitatively. Results: For the simulated phantom studies (PSs), MASAC and CAC provide similar segmentations of the different lesions, while ST fails to achieve reliable results. For the clinical datasets (2 cases with connected high-uptake regions excluded) (CSs), CAC provides for the lowest mean RE (−8.38% ± 27.49%), while MASAC achieves the best mean DSC (0.71 ± 0.09) and mean CE (53.92% ± 12.65%), respectively. MASAC could reliably quantify different types of lesions assessed in this work

  6. Design Robust Controller for Rotary Kiln

    Directory of Open Access Journals (Sweden)

    Omar D. Hernández-Arboleda

    2013-11-01

Full Text Available This paper presents the design of a robust controller for a rotary kiln. The designed controller is a combination of a fractional PID and a linear quadratic regulator (LQR), a combination not previously used to control the kiln. In addition, robustness criteria (gain margin, phase margin, gain strength, high-frequency noise rejection, and sensitivity) are evaluated for the entire controller-plant model, obtaining good results over a frequency range of 0.020 to 90 rad/s, which contributes to the robustness of the system.

  7. On the Interplay between Entropy and Robustness of Gene Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Bor-Sen Chen

    2010-05-01

Full Text Available The interplay between entropy and robustness of gene networks is a core mechanism of systems biology. Entropy is a measure of the randomness or disorder of a physical system due to random parameter fluctuations and environmental noise in gene regulatory networks. The robustness of a gene regulatory network, which can be measured as the ability to tolerate random parameter fluctuations and to attenuate the effect of environmental noise, is discussed from the robust H∞ stabilization and filtering perspective. In this review, we also discuss their balancing roles in evolution and potential applications in systems and synthetic biology.

  8. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
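
    The following is a minimal sketch, under stated assumptions, of the leveling step described above: a low-order 2-D polynomial trend is fitted with iteratively reweighted least squares so that features and outliers are downweighted and do not bias the trend. It is not the authors' implementation; the Huber-style weights and the polynomial order are illustrative choices.

```python
# Minimal sketch (not the authors' code): level an image by subtracting a robustly
# fitted 2-D polynomial trend. Pixels far from the trend (features, outliers) are
# iteratively downweighted, so they do not bias the trend estimate.
import numpy as np

def robust_level(img, order=2, n_iter=10, k=1.345):
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y, z = xx.ravel() / nx, yy.ravel() / ny, img.ravel().astype(float)
    # design matrix for a 2-D polynomial of the given order
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    w = np.ones_like(z)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        r = z - A @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        w = np.minimum(1.0, k / (np.abs(r) / s + 1e-12))          # Huber-style weights
    trend = (A @ coef).reshape(ny, nx)
    return img - trend, trend

rng = np.random.default_rng(1)
img = 0.5 * np.add.outer(np.arange(64), np.arange(64)) / 64       # planar tilt
img += rng.normal(scale=0.05, size=(64, 64))
img[20:25, 30:40] += 5.0                                          # feature/outlier region
leveled, trend = robust_level(img)
```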

  9. Robust Adaptive Speed Control of Induction Motor Drives

    DEFF Research Database (Denmark)

    Bidstrup, N.

, (LS) identification and generalized predictive control (GPC) has been implemented and tested on the CVC drive. Although GPC is a robust control method, it was not possible to maintain specified controller performance in the entire operating range. This was the main reason for investigating truly...... adaptive speed control of the CVC drive. A direct truly adaptive speed controller has been implemented. The adaptive controller is a Moving Average Self-Tuning Regulator, which is abbreviated MASTR throughout the thesis. Two practical implementations of this controller were proposed. They were denoted MASTR...... and measurement noise in general, were the major reasons for the drifting parameters. Two approaches were proposed to robustify MASTR2 against the output noise. The first approach consists of filtering the output. Output filtering had a significant effect in simulations, but the robustness against the output noise...

  10. Robust Adaptive Speed Control of Induction Motor Drives

    DEFF Research Database (Denmark)

    Bidstrup, N.

This thesis concerns speed control of current vector controlled induction motor drives (CVC drives). The CVC drive is an existing prototype drive developed by Danfoss A/S, Transmission Division. Practical tests have revealed that the open loop dynamical properties of the CVC drive are highly......, (LS) identification and generalized predictive control (GPC) has been implemented and tested on the CVC drive. Although GPC is a robust control method, it was not possible to maintain specified controller performance in the entire operating range. This was the main reason for investigating truly...... and measurement noise in general, were the major reasons for the drifting parameters. Two approaches were proposed to robustify MASTR2 against the output noise. The first approach consists of filtering the output. Output filtering had a significant effect in simulations, but the robustness against the output noise...

  11. Two-level Robust Measurement Fusion Kalman Filter for Clustering Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Peng; QI Wen-Juan; DENG Zi-Li

    2014-01-01

    This paper investigates the distributed fusion Kalman filtering over clustering sensor networks. The sensor network is partitioned as clusters by the nearest neighbor rule and each cluster consists of sensing nodes and cluster-head. Using the minimax robust estimation principle, based on the worst-case conservative system with the conservative upper bounds of noise variances, two-level robust measurement fusion Kalman filter is presented for the clustering sensor network systems with uncertain noise variances. It can significantly reduce the communication load and save energy when the number of sensors is very large. A Lyapunov equation approach for the robustness analysis is presented, by which the robustness of the local and fused Kalman filters is proved. The concept of the robust accuracy is presented, and the robust accuracy relations among the local and fused robust Kalman filters are proved. It is proved that the robust accuracy of the two-level weighted measurement fuser is equal to that of the global centralized robust fuser and is higher than those of each local robust filter and each local weighted measurement fuser. A simulation example shows the correctness and effectiveness of the proposed results.
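
    The paper's two-level fuser is not reproduced here, but the core idea, designing the filter with conservative upper bounds on the uncertain noise variances and fusing cluster measurements by inverse-variance weighting before a Kalman update, can be sketched for a scalar system as follows. All model values are illustrative.

```python
# Minimal sketch of the underlying idea (not the paper's algorithm): each sensor's
# measurement noise variance is uncertain, so the filter uses a conservative upper
# bound; measurements within a cluster are fused by inverse-variance weighting
# before a scalar Kalman update.
import numpy as np

def fuse_measurements(z, r_upper):
    """Weighted measurement fusion: z and r_upper are per-sensor arrays."""
    w = 1.0 / r_upper
    return np.sum(w * z) / np.sum(w), 1.0 / np.sum(w)

def kalman_step(x, p, z_fused, r_fused, a=1.0, q_upper=0.05):
    # predict with a conservative upper bound q_upper on the process noise variance
    x_pred, p_pred = a * x, a * p * a + q_upper
    k = p_pred / (p_pred + r_fused)            # Kalman gain
    return x_pred + k * (z_fused - x_pred), (1 - k) * p_pred

rng = np.random.default_rng(2)
true_r = np.array([0.4, 0.9, 0.6])             # actual (unknown) sensor noise variances
r_upper = np.array([0.5, 1.0, 0.8])            # conservative upper bounds used by filter
x_true, x_est, p = 0.0, 0.0, 1.0
for _ in range(100):
    x_true = 0.98 * x_true + rng.normal(scale=0.2)
    z = x_true + rng.normal(scale=np.sqrt(true_r))
    z_f, r_f = fuse_measurements(z, r_upper)
    x_est, p = kalman_step(x_est, p, z_f, r_f, a=0.98, q_upper=0.05)
print("final estimate vs truth:", x_est, x_true)
```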

  12. ROBUSTNESS OF A FACE-RECOGNITION TECHNIQUE BASED ON SUPPORT VECTOR MACHINES

    OpenAIRE

    Prashanth Harshangi; Koshy George

    2010-01-01

The ever-increasing security requirements have placed a greater demand on face recognition surveillance systems. However, most current face recognition techniques are not quite robust with respect to factors such as variable illumination, facial expression and detail, and noise in images. In this paper, we demonstrate that face recognition using support vector machines is sufficiently robust to different kinds of noise, does not require image pre-processing, and can be used with...

  13. Robust Frame Synchronization for Low Signal-to-Noise Ratio Channels Using Energy-Corrected Differential Correlation

    Directory of Open Access Journals (Sweden)

    Kim Pansoo

    2009-01-01

    Full Text Available Recent standards for wireless transmission require reliable synchronization for channels with low signal-to-noise ratio (SNR as well as with a large amount of frequency offset, which necessitates a robust correlator structure for the initial frame synchronization process. In this paper, a new correlation strategy especially targeted for low SNR regions is proposed and its performance is analyzed. By utilizing a modified energy correction term, the proposed method effectively reduces the variance of the decision variable to enhance the detection performance. Most importantly, the method is demonstrated to outperform all previously reported schemes by a significant margin, for SNRs below 5 dB regardless of the existence of the frequency offsets. A variation of the proposed method is also presented for further enhancement over the channels with small frequency errors. The particular application considered for the performance verification is the second generation digital video broadcasting system for satellites (DVB-S2.

  14. Stochastic Mesocortical Dynamics and Robustness of Working Memory during Delay-Period.

    Directory of Open Access Journals (Sweden)

    Melissa Reneaux

    Full Text Available The role of prefronto-mesoprefrontal system in the dopaminergic modulation of working memory during delayed response tasks is well-known. Recently, a dynamical model of the closed-loop mesocortical circuit has been proposed which employs a deterministic framework to elucidate the system's behavior in a qualitative manner. Under natural conditions, noise emanating from various sources affects the circuit's functioning to a great extent. Accordingly in the present study, we reformulate the model into a stochastic framework and investigate its steady state properties in the presence of constant background noise during delay-period. From the steady state distribution, global potential landscape and signal-to-noise ratio are obtained which help in defining robustness of the circuit dynamics. This provides insight into the robustness of working memory during delay-period against its disruption due to background noise. The findings reveal that the global profile of circuit's robustness is predominantly governed by the level of D1 receptor activity and high D1 receptor stimulation favors the working memory-associated sustained-firing state over the spontaneous-activity state of the system. Moreover, the circuit's robustness is further fine-tuned by the levels of excitatory and inhibitory activities in a way such that the robustness of sustained-firing state exhibits an inverted-U shaped profile with respect to D1 receptor stimulation. It is predicted that the most robust working memory is formed possibly at a subtle ratio of the excitatory and inhibitory activities achieved at a critical level of D1 receptor stimulation. The study also paves a way to understand various cognitive deficits observed in old-age, acute stress and schizophrenia and suggests possible mechanistic routes to the working memory impairments based on the circuit's robustness profile.

  15. Stochastic Mesocortical Dynamics and Robustness of Working Memory during Delay-Period.

    Science.gov (United States)

    Reneaux, Melissa; Gupta, Rahul; Karmeshu

    2015-01-01

    The role of prefronto-mesoprefrontal system in the dopaminergic modulation of working memory during delayed response tasks is well-known. Recently, a dynamical model of the closed-loop mesocortical circuit has been proposed which employs a deterministic framework to elucidate the system's behavior in a qualitative manner. Under natural conditions, noise emanating from various sources affects the circuit's functioning to a great extent. Accordingly in the present study, we reformulate the model into a stochastic framework and investigate its steady state properties in the presence of constant background noise during delay-period. From the steady state distribution, global potential landscape and signal-to-noise ratio are obtained which help in defining robustness of the circuit dynamics. This provides insight into the robustness of working memory during delay-period against its disruption due to background noise. The findings reveal that the global profile of circuit's robustness is predominantly governed by the level of D1 receptor activity and high D1 receptor stimulation favors the working memory-associated sustained-firing state over the spontaneous-activity state of the system. Moreover, the circuit's robustness is further fine-tuned by the levels of excitatory and inhibitory activities in a way such that the robustness of sustained-firing state exhibits an inverted-U shaped profile with respect to D1 receptor stimulation. It is predicted that the most robust working memory is formed possibly at a subtle ratio of the excitatory and inhibitory activities achieved at a critical level of D1 receptor stimulation. The study also paves a way to understand various cognitive deficits observed in old-age, acute stress and schizophrenia and suggests possible mechanistic routes to the working memory impairments based on the circuit's robustness profile.

  16. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images

  17. Automatic, non-intrusive, flame detection in pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, M.D.; Mehta, S.A.; Moore, R.G. [Calgary Univ., AB (Canada). Dept. of Chemical and Petroleum Engineering; Al-Himyary, T.J. [Al-Himyary Consulting Inc., Calgary, AB (Canada)

    2004-07-01

Flames have been known to occur within small diameter pipes operating under conditions of high turbulent flow. Although there are several methods of flame detection, few offer remote, non-line-of-sight detection. In particular, combustion cannot be detected in cases where flammable mixtures are carried in flare lines, storage tank vents, air drilling or improperly designed purging operations. Combustion noise is being examined as a means to address this problem. A study was conducted in which flames within a small diameter tube were automatically detected using high speed pressure measurements and a newly developed algorithm. Commercially available, high-pressure, dynamic-pressure transducers were used for the measurements. The results of an experimental study showed that combustion noise can be distinguished from other sources of noise by its inverse power law relationship with frequency. This paper presented a newly developed algorithm which provides early detection of flames when combined with high-speed pressure measurements. The algorithm can also separate combustion noise automatically from other sources of noise when combined with other filters. In this study, the noise generated by a fluttering check valve was attenuated using a stop band filter. This detection method was found to be very reliable under the conditions tested, as long as there was no flow restriction between the sensor and the flame. A flow restriction would have resulted in the detection of only the strongest flame noise. It was shown that acoustic flame detection can be applied successfully in flare stacks, industrial burners and turbine combustors. It can be 15 times more sensitive than optical or electrical methods in diagnosing combustion problems with lean burning combustors. It may also be the only method available in applications that require remote, non-line-of-sight detection. 11 refs., 3 tabs., 15 figs.
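
    The detection algorithm itself is not given in the record, but the reported principle, that combustion noise follows an inverse power-law relationship with frequency, can be sketched as a spectral-slope test: estimate the power spectral density of the dynamic-pressure signal and fit a slope in log-log coordinates. The band limits and the decision threshold below are illustrative assumptions.

```python
# Hedged sketch of the reported principle (not the authors' algorithm): estimate the
# power spectral density of the dynamic-pressure signal and fit a slope in log-log
# space; a strongly negative slope (inverse power law) is taken as a flame indicator.
# The -1.5 threshold and 50-2000 Hz band are purely illustrative.
import numpy as np
from scipy.signal import welch

def spectral_slope(x, fs, fmin=50.0, fmax=2000.0):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
    return slope

fs = 10_000
rng = np.random.default_rng(3)
white = rng.normal(size=fs * 2)                                     # flat spectrum: no flame
brown = np.cumsum(rng.normal(size=fs * 2)); brown -= brown.mean()   # ~1/f^2 spectrum

for name, sig in [("no-flame (white)", white), ("flame-like (1/f^2)", brown)]:
    s = spectral_slope(sig, fs)
    print(f"{name}: slope = {s:.2f} ->", "FLAME" if s < -1.5 else "no flame")
```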

  18. Robust Fallback Scheme for the Danish Automatic Voltage Control System

    DEFF Research Database (Denmark)

    Qin, Nan; Dmitrova, Evgenia; Lund, Torsten

    2015-01-01

This paper proposes a fallback scheme for the Danish automatic voltage control system. It is activated in case the local station loses telecommunication to the control center and/or the local station voltage violates the acceptable operational limits. It cuts in/out switchable and tap...... power system.

  19. Robust visual hashing via ICA

    International Nuclear Information System (INIS)

    Fournel, Thierry; Coltuc, Daniela

    2010-01-01

    Designed to maximize information transmission in the presence of noise, independent component analysis (ICA) could appear in certain circumstances as a statistics-based tool for robust visual hashing. Several ICA-based scenarios can attempt to reach this goal. A first one is here considered.

  20. Assuring robustness to noise in optimal quantum control experiments

    International Nuclear Information System (INIS)

    Bartelt, A.F.; Roth, M.; Mehendale, M.; Rabitz, H.

    2005-01-01

Closed-loop optimal quantum control experiments operate in the inherent presence of laser noise. In many applications, attaining high quality results [i.e., a high signal-to-noise (S/N) ratio for the optimized objective] is as important as producing a high control yield. Enhancement of the S/N ratio will typically be in competition with the mean signal; however, this competition can be balanced by biasing the optimization experiments towards higher mean yields while retaining a good S/N ratio. Other strategies can also direct the optimization to reduce the standard deviation of the statistical signal distribution. The ability to enhance the S/N ratio through an optimized choice of the control is demonstrated for two condensed phase model systems: second harmonic generation in a nonlinear optical crystal and stimulated emission pumping in a dye solution.

  1. Automatic document navigation for digital content remastering

    Science.gov (United States)

    Lin, Xiaofan; Simske, Steven J.

    2003-12-01

    This paper presents a novel method of automatically adding navigation capabilities to re-mastered electronic books. We first analyze the need for a generic and robust system to automatically construct navigation links into re-mastered books. We then introduce the core algorithm based on text matching for building the links. The proposed method utilizes the tree-structured dictionary and directional graph of the table of contents to efficiently conduct the text matching. Information fusion further increases the robustness of the algorithm. The experimental results on the MIT Press digital library project are discussed and the key functional features of the system are illustrated. We have also investigated how the quality of the OCR engine affects the linking algorithm. In addition, the analogy between this work and Web link mining has been pointed out.

  2. On the robustness of EC-PC spike detection method for online neural recording.

    Science.gov (United States)

    Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi

    2014-09-30

Online spike detection is an important step to compress neural data and perform real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated data and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with some popular detectors. Moreover, the EC-PC detector can track changes in the background noise due to the ability to re-estimate the neural data distribution. Both real and synthesized data have been used for testing the proposed algorithm in comparison with other methods, including the absolute thresholding detector (AT), median absolute deviation detector (MAD), nonlinear energy operator detector (NEO), and continuous wavelet detector (CWD). Comparative testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detector. It is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
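
    The EC-PC algorithm itself is not described in enough detail here to reproduce, but one of the baselines it is compared against, the median absolute deviation (MAD) detector, is simple to sketch: the noise standard deviation is estimated robustly and spikes are flagged where the signal exceeds a multiple of it. The multiplier k = 4 is a common choice, not a value taken from the paper.

```python
# Sketch of the median-absolute-deviation (MAD) baseline detector mentioned above
# (the EC-PC algorithm is not reproduced here). The noise standard deviation is
# estimated robustly as median(|x|)/0.6745 and events are flagged where the signal
# crosses a multiple of that estimate.
import numpy as np

def mad_spike_detect(x, k=4.0):
    sigma = np.median(np.abs(x)) / 0.6745      # robust noise std estimate
    above = np.abs(x) > k * sigma
    # keep only the first sample of each threshold crossing
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    return onsets, sigma

rng = np.random.default_rng(4)
x = rng.normal(scale=1.0, size=30_000)         # background noise
spike_times = rng.choice(30_000, size=60, replace=False)
x[spike_times] += 8.0                          # injected spikes
onsets, sigma = mad_spike_detect(x)
print(f"estimated noise sigma = {sigma:.2f}, detected {len(onsets)} events")
```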

  3. Acoustic ambient noise recorder

    Digital Repository Service at National Institute of Oceanography (India)

    Saran, A.K.; Navelkar, G.S.; Almeida, A.M.; More, S.R.; Chodankar, P.V.; Murty, C.S.

    with a robust outfit that can withstand high pressures and chemically corrosion resistant materials. Keeping these considerations in view, a CMOS micro-controller-based marine acoustic ambient noise recorder has been developed with a real time clock...

  4. Noise power spectrum of the fixed pattern noise in digital radiography detectors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Sik, E-mail: dskim@hufs.ac.kr [Department of Electronics Engineering, Hankuk University of Foreign Studies, Gyeonggi-do 449-791 (Korea, Republic of); Kim, Eun [R& D Center, DRTECH Co., Gyeonggi-do 13558 (Korea, Republic of)

    2016-06-15

    Purpose: The fixed pattern noise in radiography image detectors is caused by various sources. Multiple readout circuits with gate drivers and charge amplifiers are used to efficiently acquire the pixel voltage signals. However, the multiple circuits are not identical and thus yield nonuniform system gains. Nonuniform sensitivities are also produced from local variations in the charge collection elements. Furthermore, in phosphor-based detectors, the optical scattering at the top surface of the columnar CsI growth, the grain boundaries, and the disorder structure causes spatial sensitivity variations. These nonuniform gains or sensitivities cause fixed pattern noise and degrade the detector performance, even though the noise problem can be partially alleviated by using gain correction techniques. Hence, in order to develop good detectors, comparative analysis of the energy spectrum of the fixed pattern noise is important. Methods: In order to observe the energy spectrum of the fixed pattern noise, a normalized noise power spectrum (NNPS) of the fixed pattern noise is considered in this paper. Since the fixed pattern noise is mainly caused by the nonuniform gains, we call the spectrum the gain NNPS. We first asymptotically observe the gain NNPS and then formulate two relationships to calculate the gain NNPS based on a nonuniform-gain model. Since the gain NNPS values are quite low compared to the usual NNPS, measuring such a low NNPS value is difficult. By using the average of the uniform exposure images, a robust measuring method for the gain NNPS is proposed in this paper. Results: By using the proposed measuring method, the gain NNPS curves of several prototypes of general radiography and mammography detectors were measured to analyze their fixed pattern noise properties. We notice that a direct detector, which is based on the a-Se photoconductor, showed lower gain NNPS than the indirect-detector case, which is based on the CsI scintillator. By comparing the gain
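
    A rough sketch of the measurement idea described above (not the paper's exact procedure): averaging many uniform-exposure frames suppresses the temporal quantum and electronic noise, leaving mainly the fixed pattern, whose normalized noise power spectrum can then be taken from the 2-D FFT of the gain-normalized average. The pixel pitch, frame count and gain model below are assumptions.

```python
# Rough sketch of the measurement idea (not the paper's exact procedure): average many
# uniform-exposure frames so temporal noise averages out and the fixed pattern remains,
# then compute a normalized noise power spectrum from the 2-D FFT of the
# gain-normalized average image.
import numpy as np

def gain_nnps(frames, pixel_pitch_mm=0.1):
    avg = frames.mean(axis=0)                  # fixed pattern + residual temporal noise
    rel = avg / avg.mean() - 1.0               # relative deviation from the mean signal
    ny, nx = rel.shape
    f = np.fft.fftshift(np.abs(np.fft.fft2(rel))**2)
    return f * (pixel_pitch_mm**2) / (nx * ny) # normalized NPS (mm^2), one common convention

rng = np.random.default_rng(5)
gain = 1.0 + 0.01 * rng.normal(size=(256, 256))          # 1% fixed gain nonuniformity
frames = np.stack([gain * 1000 + rng.normal(scale=30, size=(256, 256))
                   for _ in range(64)])                  # 64 simulated uniform exposures
nnps = gain_nnps(frames)
print("mean gain NNPS:", nnps.mean())
```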

  5. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    Science.gov (United States)

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limiting in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  6. Adaptive robust Kalman filtering for precise point positioning

    International Nuclear Information System (INIS)

    Guo, Fei; Zhang, Xiaohong

    2014-01-01

The optimality of a precise point positioning (PPP) solution using a Kalman filter is closely connected to the quality of the a priori information about the process noise and the updated measurement noise, which are sometimes difficult to obtain. Also, the estimation environment in the case of dynamic or kinematic applications is not always fixed but is subject to change. To overcome these problems, an adaptive robust Kalman filtering algorithm, the main feature of which introduces an equivalent covariance matrix to resist the unexpected outliers and an adaptive factor to balance the contribution of observational information and predicted information from the system dynamic model, is applied for PPP processing. The basic models of PPP including the observation model, dynamic model and stochastic model are provided first. Then an adaptive robust Kalman filter is developed for PPP. Compared with the conventional robust estimator, only the observation with the largest standardized residual will be operated by the IGG III function in each iteration to avoid reducing the contribution of the normal observations or even filter divergence. Finally, tests carried out in both static and kinematic modes have confirmed that the adaptive robust Kalman filter outperforms the classic Kalman filter by tuning either the equivalent variance matrix or the adaptive factor or both of them. This becomes evident when analyzing the positioning errors in flight tests at the turns due to the target maneuvering and unknown process/measurement noises. (paper)
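
    The IGG III function referenced above is a standard three-segment robust weighting scheme; a sketch follows, with typical textbook threshold constants rather than values taken from the paper.

```python
# Sketch of the three-segment IGG III weighting scheme referenced above; the
# thresholds k0, k1 are typical textbook choices, not values from the paper.
import numpy as np

def igg3_factor(v, k0=1.5, k1=3.0):
    """Downweighting factor for a standardized residual v (equivalent-weight approach)."""
    av = abs(v)
    if av <= k0:
        return 1.0                                     # keep full weight
    if av <= k1:
        return (k0 / av) * ((k1 - av) / (k1 - k0))**2  # smoothly reduce the weight
    return 0.0                                         # reject as an outlier

for v in [0.5, 2.0, 3.5]:
    print(f"|v| = {v}: weight factor = {igg3_factor(v):.3f}")
```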

  7. Brain MR Image Restoration Using an Automatic Trilateral Filter With GPU-Based Acceleration.

    Science.gov (United States)

    Chang, Herng-Hua; Li, Cheng-Yuan; Gallogly, Audrey Haihong

    2018-02-01

Noise reduction in brain magnetic resonance (MR) images has been a challenging and demanding task. This study develops a new trilateral filter that aims to achieve robust and efficient image restoration. Extended from the bilateral filter, the proposed algorithm contains one additional intensity similarity function, which compensates for the unique characteristics of noise in brain MR images. An entropy function adaptive to intensity variations is introduced to regulate the contributions of the weighting components. To hasten the computation, parallel computing based on the graphics processing unit (GPU) strategy is explored with emphasis on memory allocations and thread distributions. To automate the filtration, image texture feature analysis associated with machine learning is investigated. Among the 98 candidate features, the sequential forward floating selection scheme is employed to acquire the optimal texture features for regularization. Subsequently, a two-stage classifier that consists of support vector machines and artificial neural networks is established to predict the filter parameters for automation. A speedup gain of 757 was reached to process an entire MR image volume of 256 × 256 × 256 pixels, which completed within 0.5 s. Automatic restoration results revealed high accuracy with an ensemble average relative error of 0.53 ± 0.85% in terms of the peak signal-to-noise ratio. This self-regulating trilateral filter outperformed many state-of-the-art noise reduction methods both qualitatively and quantitatively. We believe that this new image restoration algorithm has potential in many brain MR image processing applications that require expedition and automation.

  8. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method can obtain better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method in causing less residual noise and also in saving half of the computational cost. Several synthetic and field data examples with different levels of complexities demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise including random noise, spiky noise, blending noise, and coherent noise.

  9. Robustness of Populations in Stochastic Environments

    DEFF Research Database (Denmark)

    Gießen, Christian; Kötzing, Timo

    2016-01-01

    We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend...... the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings....

  10. Noise and vibration analysis system

    International Nuclear Information System (INIS)

    Johnsen, J.R.; Williams, R.L.

    1985-01-01

    The analysis of noise and vibration data from an operating nuclear plant can provide valuable information that can identify and characterize abnormal conditions. Existing plant monitoring equipment, such as loose parts monitoring systems (LPMS) and neutron flux detectors, may be capable of gathering noise data, but may lack the analytical capability to extract useful meanings hidden in the noise. By analyzing neutron noise signals, the structural motion and integrity of core components can be assessed. Computer analysis makes trending of frequency spectra within a fuel cycle and from one cycle to another a practical means of core internals monitoring. The Babcock and Wilcox Noise and Vibration Analysis System (NVAS) is a powerful, compact system that can automatically perform complex data analysis. The system can acquire, process, and store data, then produce report-quality plots of the important parameter. Software to perform neutron noise analysis and loose parts analysis operates on the same hardware package. Since the system is compact, inexpensive, and easy to operate, it allows utilities to perform more frequency analyses without incurring high costs and provides immediate results

  11. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

Full Text Available In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality problem, since they can discover the potential intrinsic low-dimensional structures of the high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms are proposed to predict the labels of the unlabeled points, taking into account label information. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  12. A robust nonlinear filter for image restoration.

    Science.gov (United States)

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
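
    As a minimal illustration of the least trimmed squares criterion mentioned above (the paper's filters fit richer image-structure models), the sketch below computes a windowed location estimate that minimizes the sum of the h smallest squared residuals, so impulsive outliers in the window are simply ignored.

```python
# Minimal illustration of the least-trimmed-squares (LTS) idea for a location
# (constant-signal) model on a sliding window; this only shows how trimming
# rejects impulsive outliers, not the paper's full regression filters.
import numpy as np

def lts_location(window, h=None):
    """Value m minimizing the sum of the h smallest squared residuals (window - m)^2."""
    x = np.sort(np.asarray(window, dtype=float))
    n = len(x)
    h = h or (n // 2 + 1)
    # the optimum is the mean of some h consecutive order statistics; scan all of them
    best_m, best_cost = None, np.inf
    for i in range(n - h + 1):
        sub = x[i:i + h]
        m = sub.mean()
        cost = np.sum((sub - m)**2)
        if cost < best_cost:
            best_m, best_cost = m, cost
    return best_m

window = [10.1, 9.8, 10.3, 55.0, 9.9, 10.0, -40.0]     # two impulsive outliers
print("LTS estimate:", lts_location(window))           # close to 10, unaffected by outliers
```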

  13. A simple procedure to estimate reactivity with good noise filtering characteristics

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2014-01-01

    Highlights: • A new and simple on-line reactivity estimation method is proposed. • The estimator has robust noise filtering characteristics. • The noise filtering is equivalent to those of conventional reactivity meters. • The new estimator eliminates the burden of selecting optimum filter constants. • The new estimation performance is assessed without and with measurement noise. - Abstract: A new and simple on-line reactivity estimation method is proposed. The estimator has robust noise filtering characteristics without the use of complex filters. The noise filtering capability is equivalent to or better than that of a conventional estimator based on Inverse Point Kinetics (IPK). The new estimator can also eliminate the burden of selecting optimum filter time constants, such as would be required for the IPK-based estimator, or noise covariance matrices, which are needed if the extended Kalman filter (EKF) technique is used. In this paper, the new estimation method is introduced and its performance assessed without and with measurement noise

  14. Robust signal extraction for on-line monitoring data

    NARCIS (Netherlands)

    Davies, P.L.; Fried, R.; Gather, U.

    2004-01-01

    Data from the automatic monitoring of intensive care patients exhibits trends, outliers, and level changes as well as periods of relative constancy. All this is overlaid with a high level of noise and there are dependencies between the different items measured. Current monitoring systems tend to

  15. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    Science.gov (United States)

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel embalmed cadavers. Chest CT's of Thiel embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Phase noise of dispersion-managed solitons

    International Nuclear Information System (INIS)

    Spiller, Elaine T.; Biondini, Gino

    2009-01-01

    We quantify noise-induced phase deviations of dispersion-managed solitons (DMS) in optical fiber communications and femtosecond lasers. We first develop a perturbation theory for the dispersion-managed nonlinear Schroedinger equation (DMNLSE) in order to compute the noise-induced mean and variance of the soliton parameters. We then use the analytical results to guide importance-sampled Monte Carlo simulations of the noise-driven DMNLSE. Comparison of these results with those from the original unaveraged governing equations confirms the validity of the DMNLSE as a model for many dispersion-managed systems and quantify the increased robustness of DMS with respect to noise-induced phase jitter.

  17. Incremental Activation Detection for Real-Time fMRI Series Using Robust Kalman Filter

    Directory of Open Access Journals (Sweden)

    Liang Li

    2014-01-01

Full Text Available Real-time functional magnetic resonance imaging (rt-fMRI) is a technique that enables us to observe human brain activations in real time. However, unexpected noise that emerges during fMRI data collection, such as swallowing, head movement and manual manipulations, causes confusion and reduces the robustness of the activation analysis. In this paper, a new activation detection method for rt-fMRI data is proposed based on a robust Kalman filter. The idea is to add a variation to the extended Kalman filter to handle the additional sparse measurement noise and a sparse noise term in the measurement update step. Hence, the robust Kalman filter is designed to improve robustness against outliers and can be computed separately for each voxel. The algorithm can compute activation maps on each scan within a repetition time, which meets the requirement for real-time analysis. Experimental results show that this new algorithm achieves high performance in robustness and in real-time activation detection.

  18. Noise equalization for detection of microcalcification clusters in direct digital mammogram images.

    NARCIS (Netherlands)

    McLoughlin, K.J.; Bones, P.J.; Karssemeijer, N.

    2004-01-01

    Equalizing image noise is shown to be an important step in the automatic detection of microcalcifications in digital mammography. This study extends a well established film-screen noise equalization scheme developed by Veldkamp et al. for application to full-field digital mammogram (FFDM) images. A

  19. Track filtering by robust neural network

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Kisel', I.V.; Konotopskaya, E.V.; Ososkov, G.A.

    1993-01-01

In the present paper we study the following problems of track information extraction by the artificial neural network (ANN) rotor model: providing an initial ANN configuration by an algorithm general enough to be applicable to any discrete detector in or out of a magnetic field; robustness to heavily contaminated raw data (up to 100% signal-to-noise ratio); stability to growing event multiplicity. These problems were addressed by corresponding innovations of our model, namely: by a special one-dimensional histogramming, by multiplying weights by a specially designed robust multiplier, and by replacing the simulated annealing schedule by ANN dynamics with an optimally fixed temperature. Our approach is valid for both circular and straight (non-magnetic) tracks and tested on 2D simulated data contaminated by 100% noise points distributed uniformly. To make the simulation more realistic, we keep the parameters of the cylindrical spectrometer ARES. 12 refs.; 9 figs

  20. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    Science.gov (United States)

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variation, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
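
    The proposed robust, adaptively tuned estimator is not reproduced here, but the regularization step it builds on, thresholding small sample correlations to zero, can be sketched as follows; the universal threshold rule and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of correlation-matrix thresholding (the paper's robust, adaptively
# tuned estimator is not reproduced). Entries whose absolute sample correlation falls
# below a data-driven cut are set to zero, which filters spurious correlations in the
# n << p regime. The threshold rule used here is illustrative.
import numpy as np

def thresholded_correlation(X, alpha=2.0):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    cut = alpha * np.sqrt(np.log(p) / n)       # illustrative universal threshold
    R_thr = np.where(np.abs(R) >= cut, R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return R_thr, cut

rng = np.random.default_rng(6)
n, p = 50, 200                                 # many more genes than samples
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=n)   # one genuinely correlated gene pair
R_thr, cut = thresholded_correlation(X)
kept = np.count_nonzero(np.triu(R_thr, 1))
print(f"threshold = {cut:.2f}, off-diagonal correlations kept: {kept}")
```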

  1. Noise-dependent optimal strategies for quantum metrology

    Science.gov (United States)

    Huang, Zixin; Macchiavello, Chiara; Maccone, Lorenzo

    2018-03-01

    For phase estimation using qubits, we show that for some noise channels, the optimal entanglement-assisted strategy depends on the noise level. We note that there is a nontrivial crossover between the parallel-entangled strategy and the ancilla-assisted strategy: in the former the probes are all entangled; in the latter the probes are entangled with a noiseless ancilla but not among themselves. The transition can be explained by the fact that separable states are more robust against noise and therefore are optimal in the high-noise limit, but they are in turn outperformed by ancilla-assisted ones.

  2. Robust electrocardiogram (ECG) beat classification using discrete wavelet transform

    International Nuclear Information System (INIS)

    Minhas, Fayyaz-ul-Amir Afsar; Arif, Muhammad

    2008-01-01

    This paper presents a robust technique for the classification of six types of heartbeats through an electrocardiogram (ECG). Features extracted from the QRS complex of the ECG using a wavelet transform along with the instantaneous RR-interval are used for beat classification. The wavelet transform utilized for feature extraction in this paper can also be employed for QRS delineation, leading to reduction in overall system complexity as no separate feature extraction stage would be required in the practical implementation of the system. Only 11 features are used for beat classification with the classification accuracy of ∼99.5% through a KNN classifier. Another main advantage of this method is its robustness to noise, which is illustrated in this paper through experimental results. Furthermore, principal component analysis (PCA) has been used for feature reduction, which reduces the number of features from 11 to 6 while retaining the high beat classification accuracy. Due to reduction in computational complexity (using six features, the time required is ∼4 ms per beat), a simple classifier and noise robustness (at 10 dB signal-to-noise ratio, accuracy is 95%), this method offers substantial advantages over previous techniques for implementation in a practical ECG analyzer
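
    A schematic sketch of the pipeline described above: discrete-wavelet-transform features of each beat window plus the instantaneous RR interval, classified with k-nearest neighbours. The wavelet family, decomposition level, sub-band energy features and the synthetic stand-in beats below are assumptions, not the paper's exact configuration.

```python
# Schematic sketch of the described pipeline: DWT features of a beat window plus the
# instantaneous RR interval, classified with k-nearest neighbours. Synthetic stand-in
# "beats" are used; the wavelet, level and feature choices are assumptions.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def beat_features(beat, rr):
    coeffs = pywt.wavedec(beat, "db4", level=3)          # multi-level DWT of the beat
    feats = [np.sqrt(np.mean(c**2)) for c in coeffs]     # energy per sub-band
    return np.array(feats + [rr])

rng = np.random.default_rng(7)
X, y = [], []
for label, (freq, rr_mean) in enumerate([(7.0, 0.8), (12.0, 0.6)]):  # two synthetic beat types
    for _ in range(200):
        t = np.linspace(0, 1, 128)
        beat = np.sin(2 * np.pi * freq * t) * np.hanning(128) + 0.2 * rng.normal(size=128)
        X.append(beat_features(beat, rr_mean + 0.05 * rng.normal()))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```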

  3. Robust non-local median filter

    Science.gov (United States)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2017-04-01

This paper describes a novel image filter with superior performance on detail-preserving removal of random-valued impulse noise superimposed on natural gray-scale images. The non-local means filter has attracted attention as a method for Gaussian noise removal with superior performance on detail preservation. Drawing on the fundamental concept of non-local means, we previously proposed a non-local median filter specialized for random-valued impulse noise removal. In non-local processing, the output of a filter is calculated from pixels in blocks which are similar to the block centered at the pixel of interest. As a result, aggressive noise removal is conducted without destroying the detailed structures in the original image. However, the performance of non-local processing decreases enormously in the case of high noise occurrence probability. A cause of this problem is that the superimposed noise disturbs accurate calculation of the similarity between the blocks. To cope with this problem, we propose an improved non-local median filter which is robust to high levels of corruption by introducing a new similarity measure that considers the possibility of each pixel being the original signal. The effectiveness and validity of the proposed method are verified in a series of experiments using natural gray-scale images.

  4. Automatic delineation of functional volumes in emission tomography for oncology applications

    International Nuclear Information System (INIS)

    Hatt, M.

    2008-12-01

    One of the main factors of error for semi-quantitative analysis in positron emission tomography (PET) imaging for diagnosis and patient follow up, as well as new flourishing applications like image guided radiotherapy, is the methodology used to define the volumes of interest in the functional images. This is explained by poor image quality in emission tomography resulting from noise and partial volume effects induced blurring, as well as the variability of acquisition protocols, scanner models and image reconstruction procedures. The large number of proposed methodologies for the definition of a PET volume of interest does not help either. The majority of such proposed approaches are based on deterministic binary thresholding that are not robust to contrast variation and noise. In addition, these methodologies are usually unable to correctly handle heterogeneous uptake inside tumours. The objective of this thesis is to develop an automatic, robust, accurate and reproducible 3D image segmentation approach for the functional volumes determination of tumours of all sizes and shapes, and whose activity distribution may be strongly heterogeneous. The approach we have developed is based on a statistical image segmentation framework, combined with a fuzzy measure, which allows to take into account both noisy and blurry properties of nuclear medicine images. It uses a stochastic iterative parameters estimation and a locally adaptive model of the voxel and its neighbours for the estimation and segmentation. The developed approaches have been evaluated using a large array of datasets, comprising both simulated and real acquisitions of phantoms and tumours. The results obtained on phantom acquisitions allowed to validate the accuracy of the segmentation with respect to the size of considered structures, down to 13 mm in diameter (about twice the spatial resolution of a typical PET scanner), as well as its robustness with respect to noise, contrast variation, acquisition

  5. Nonlinear Image Restoration in Confocal Microscopy : Stability under Noise

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.

    1995-01-01

    In this paper we study the noise stability of iterative algorithms developed for attenuation correction in Fluorescence Confocal Microscopy using FT methods. In each iteration the convolution of the previous estimate is computed. It turns out that the estimators are robust to noise perturbation.

  6. Automatic detection and visualisation of MEG ripple oscillations in epilepsy

    Directory of Open Access Journals (Sweden)

    Nicole van Klink

    2017-01-01

    Full Text Available High frequency oscillations (HFOs, 80–500 Hz in invasive EEG are a biomarker for the epileptic focus. Ripples (80–250 Hz have also been identified in non-invasive MEG, yet detection is impeded by noise, their low occurrence rates, and the workload of visual analysis. We propose a method that identifies ripples in MEG through noise reduction, beamforming and automatic detection with minimal user effort. We analysed 15 min of presurgical resting-state interictal MEG data of 25 patients with epilepsy. The MEG signal-to-noise was improved by using a cross-validation signal space separation method, and by calculating ~2400 beamformer-based virtual sensors in the grey matter. Ripples in these sensors were automatically detected by an algorithm optimized for MEG. A small subset of the identified ripples was visually checked. Ripple locations were compared with MEG spike dipole locations and the resection area if available. Running the automatic detection algorithm resulted in on average 905 ripples per patient, of which on average 148 ripples were visually reviewed. Reviewing took approximately 5 min per patient, and identified ripples in 16 out of 25 patients. In 14 patients the ripple locations showed good or moderate concordance with the MEG spikes. For six out of eight patients who had surgery, the ripple locations showed concordance with the resection area: 4/5 with good outcome and 2/3 with poor outcome. Automatic ripple detection in beamformer-based virtual sensors is a feasible non-invasive tool for the identification of ripples in MEG. Our method requires minimal user effort and is easily applicable in a clinical setting.

  7. Noise-driven manifestation of learning in mature neural networks

    International Nuclear Information System (INIS)

    Monterola, Christopher; Saloma, Caesar

    2002-01-01

    We show that the generalization capability of a mature thresholding neural network to process above-threshold disturbances in a noise-free environment is extended to subthreshold disturbances by ambient noise without retraining. The ability to benefit from noise is intrinsic and does not have to be learned separately. Nonlinear dependence of sensitivity with noise strength is significantly narrower than in individual threshold systems. Noise has a minimal effect on network performance for above-threshold signals. We resolve two seemingly contradictory responses of trained networks to noise--their ability to benefit from its presence and their robustness against noisy strong disturbances

  8. Pavement noise measurements in Poland

    Science.gov (United States)

    Zofka, Ewa; Zofka, Adam; Mechowski, Tomasz

    2017-09-01

    The objective of this study is to investigate the feasibility of the On-Board Sound Intensity (OBSI) system for measuring tire-pavement noise in Poland. In general, the sources of noise emitted by modern vehicles are propulsion noise, aerodynamic resistance and noise generated at the tire-pavement interface. In order to capture tire-pavement noise, the OBSI system uses a noise intensity probe installed in close proximity to that interface. In this study, OBSI measurements were performed on different types of pavement surfaces such as stone mastic asphalt (SMA), regular asphalt concrete (HMA) as well as Portland cement concrete (PCC). The influence of several necessary OBSI measurement conditions was considered: testing speed, air temperature, tire pressure and tire type. The results of this study demonstrate that the OBSI system is a viable and robust tool that can be used for the quality evaluation of newly built asphalt pavements in Poland. It can also be applied to generate reliable input parameters for the noise propagation models that are used to assess the environmental impact of new and existing highway corridors.

  9. An automatic classifier of emotions built from entropy of noise.

    Science.gov (United States)

    Ferreira, Jacqueline; Brás, Susana; Silva, Carlos F; Soares, Sandra C

    2017-04-01

    The electrocardiogram (ECG) signal has been widely used to study the physiological substrates of emotion. However, searching for better filtering techniques in order to obtain a signal with better quality and with the maximum relevant information remains an important issue for researchers in this field. Signal processing is largely performed for ECG analysis and interpretation, but this process can be susceptible to error in the delineation phase. In addition, it can lead to the loss of important information that is usually considered as noise and, consequently, discarded from the analysis. The goal of this study was to evaluate whether ECG noise allows for the classification of emotions, using its entropy as the input to a decision tree classifier. We collected the ECG signal from 25 healthy participants while they were presented with videos eliciting negative (fear and disgust) and neutral emotions. The results indicated that the neutral condition showed a perfect identification (100%), whereas the classification of negative emotions indicated good identification performance (60% sensitivity and 80% specificity). These results suggest that the entropy of noise contains relevant information that can be useful to improve the analysis of the physiological correlates of emotion. © 2016 Society for Psychophysiological Research.
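    As a hedged illustration of the described workflow (entropy of the ECG noise fed to a decision tree), the sketch below extracts a Shannon-entropy feature from the high-frequency residual of an ECG segment and fits a scikit-learn decision tree. The 40 Hz cut-off, histogram binning and feature choice are assumptions, not the study's exact pipeline.

```python
# Sketch: entropy of the ECG high-frequency residual as a feature for a decision tree.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.tree import DecisionTreeClassifier

def noise_entropy(ecg, fs=500.0, cutoff=40.0, bins=32):
    """Shannon entropy of the residual left after low-pass filtering the ECG."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    residual = ecg - filtfilt(b, a, ecg)
    hist, _ = np.histogram(residual, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Usage (hypothetical variable names): one entropy value per ECG segment,
# labels 0 = neutral, 1 = negative emotion.
# X = np.array([[noise_entropy(seg)] for seg in segments])
# clf = DecisionTreeClassifier(max_depth=3).fit(X, labels)
```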

  10. A fast and robust method for automated analysis of axonal transport.

    Science.gov (United States)

    Welzel, Oliver; Knörr, Jutta; Stroebel, Armin M; Kornhuber, Johannes; Groemer, Teja W

    2011-09-01

    Cargo movement along axons and dendrites is indispensable for the survival and maintenance of neuronal networks. Key parameters of this transport such as particle velocities and pausing times are often studied using kymograph construction, which converts the transport along a line of interest from a time-lapse movie into a position versus time image. Here we present a method for the automatic analysis of such kymographs based on the Hough transform, which is a robust and fast technique to extract lines from images. The applicability of the method was tested on simulated kymograph images and real data from axonal transport of synaptophysin and tetanus toxin as well as the velocity analysis of synaptic vesicle sharing between adjacent synapses in hippocampal neurons. Efficiency analysis revealed that the algorithm is able to detect a wide range of velocities and can be used at low signal-to-noise ratios. The present work enables the quantification of axonal transport parameters with high throughput with no a priori assumptions and minimal human intervention.
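    The record describes kymograph line extraction with the Hough transform; the sketch below uses OpenCV's probabilistic variant (HoughLinesP) and converts line slopes into velocities. Pixel size, line interval and Hough parameters are placeholders rather than the published settings.

```python
# Sketch: extract particle trajectories from a kymograph (rows = time, columns = position)
# with the probabilistic Hough transform and convert slopes to velocities.
import cv2
import numpy as np

def kymograph_velocities(kymo, um_per_px=0.1, s_per_line=0.5):
    """kymo: 2-D uint8 array (rows = time, columns = position)."""
    edges = cv2.Canny(kymo, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                            minLineLength=15, maxLineGap=3)
    velocities = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if y2 == y1:          # horizontal line: zero time elapsed, skip
                continue
            dx_um = (x2 - x1) * um_per_px
            dt_s = (y2 - y1) * s_per_line
            velocities.append(abs(dx_um / dt_s))
    return velocities
```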

  11. Advancing Noise Robust Automatic Speech Recognition for Command and Control Applications

    National Research Council Canada - National Science Library

    Bass, James D

    2006-01-01

    .... The reliable elimination of the keyboard and mouse in mounted and un-mounted C2 systems has been a desire of systems developers and requirements writers since the development of PC-based ASR systems in the early 1990...

  12. Robust X-band LNAs in AlGaN/GaN technology

    NARCIS (Netherlands)

    Janssen, J.P.B.; Heijningen, M. van; Visser, G.C.; Rodenburg, M.; Johnson, H.K.; Uren, M.J.; Morvan, E.; Vliet, F.E. van

    2009-01-01

    Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realise robust receiver components. This paper presents the design, realisation and measurement of two robust AlGaN/GaN low noise amplifiers. The two versions have been

  13. Robust X-band LNAs in AlGaN/GaN technology

    NARCIS (Netherlands)

    Janssen, J.P.B.; van Heiningen, M.; Visser, G.C.; Rodenburg, M.; Johnson, H.K.; Uren, M.J.; Morvan, E.; van Vliet, Frank Edward

    2009-01-01

    Abstract Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realise robust receiver components. This paper presents the design, realisation and measurement of two robust AlGaN/GaN low noise amplifiers. The two versions have

  14. Automatic Estimation of Movement Statistics of People

    DEFF Research Database (Denmark)

    Ægidiussen Jensen, Thomas; Rasmussen, Henrik Anker; Moeslund, Thomas B.

    2012-01-01

    Automatic analysis of how people move about in a particular environment has a number of potential applications. However, no system has so far been able to do detection and tracking robustly. Instead, trajectories are often broken into tracklets. The key idea behind this paper is based around...

  15. Measurements of kinetic parameters by noise techniques on the MINERVE reactor

    International Nuclear Information System (INIS)

    Carre, J.C.; Da Costa Oliveira, J.

    1975-01-01

    Noise measurements were performed on ERMINE, a fast-thermal coupled reactor built in MINERVE. A reactor without feedback and a reactor with an automatic control rod were both considered. The first case concerned the measurements of auto and cross power spectral density obtained with one or two neutron detectors, and the determination of: neutron lifetime; efficiency for one ion chamber; power level of the reactor; maximal speed and acceleration of the control rod for the design of an automatic reactor control actuator. The second case was concerned with measurements of the auto power spectral density in reactivity for the control rod, and the estimation of: the transfer function of the automatic pilot; the neutron lifetime; and the standard error affecting the results obtained by the oscillation method. The results proved that the pile noise theory with a point kinetic model is sufficient for application on zero power reactors. (U.K.)
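    As a minimal illustration of the auto- and cross-power spectral density estimates mentioned above, the sketch below uses SciPy's Welch and cross-spectral density routines on two synthetic detector signals; the sampling rate, segment length and signals are arbitrary stand-ins for the ERMINE instrumentation.

```python
# Sketch: auto- and cross-power spectral densities from two detector signals.
import numpy as np
from scipy.signal import welch, csd

fs = 1000.0                                   # sampling frequency in Hz (illustrative)
rng = np.random.default_rng(1)
det1 = rng.standard_normal(60_000)            # stand-ins for two detector currents
det2 = 0.7 * det1 + 0.3 * rng.standard_normal(60_000)

f, apsd1 = welch(det1, fs=fs, nperseg=4096)        # auto-PSD, detector 1
_, apsd2 = welch(det2, fs=fs, nperseg=4096)        # auto-PSD, detector 2
_, cpsd12 = csd(det1, det2, fs=fs, nperseg=4096)   # cross-PSD between detectors

# The break frequency of the APSD roll-off is related to the prompt-neutron decay
# constant, from which the neutron lifetime can be inferred in zero-power noise analysis.
```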

  16. Robustness: confronting lessons from physics and biology.

    Science.gov (United States)

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems.

  17. Robustness of raw quantum tomography

    Science.gov (United States)

    Asorey, M.; Facchi, P.; Florio, G.; Man'ko, V. I.; Marmo, G.; Pascazio, S.; Sudarshan, E. C. G.

    2011-01-01

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  18. Robustness of raw quantum tomography

    Energy Technology Data Exchange (ETDEWEB)

    Asorey, M. [Departamento de Fisica Teorica, Facultad de Ciencias, Universidad de Zaragoza, 50009 Zaragoza (Spain); Facchi, P. [Dipartimento di Matematica, Universita di Bari, I-70125 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Florio, G. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Man'ko, V.I., E-mail: manko@lebedev.r [P.N. Lebedev Physical Institute, Leninskii Prospect 53, Moscow 119991 (Russian Federation); Marmo, G. [Dipartimento di Scienze Fisiche, Universita di Napoli 'Federico II', I-80126 Napoli (Italy); INFN, Sezione di Napoli, I-80126 Napoli (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Pascazio, S. [Dipartimento di Fisica, Universita di Bari, I-70126 Bari (Italy); INFN, Sezione di Bari, I-70126 Bari (Italy); MECENAS, Universita Federico II di Napoli and Universita di Bari (Italy); Sudarshan, E.C.G. [Department of Physics, University of Texas, Austin, TX 78712 (United States)

    2011-01-31

    We scrutinize the effects of non-ideal data acquisition on the tomograms of quantum states. The presence of a weight function, schematizing the effects of a finite window or equivalently noise, only affects the state reconstruction procedure by a normalization constant. The results are extended to a discrete mesh and show that quantum tomography is robust under incomplete and approximate knowledge of tomograms.

  19. Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.

    Science.gov (United States)

    Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter

    2017-09-01

    Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, the application of mass spectrometers making use of time-of-flight mass analysers is a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software, which allows fast and precise quantitative data analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
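    A hedged sketch of the peak-find-then-integrate workflow described above: SciPy's find_peaks locates the isotope peaks and a trapezoidal integration over a fixed window estimates their areas. The window width, prominence threshold and baseline handling are assumptions, not the published algorithm.

```python
# Sketch: locate TOF-MS peaks and estimate an isotope ratio from peak areas.
import numpy as np
from scipy.signal import find_peaks

def isotope_ratio(spectrum, idx_a, idx_b, half_width=10, prominence=5.0):
    """Integrate the peaks nearest to sample indices idx_a and idx_b and
    return the ratio of their areas (abundance-ratio estimate)."""
    peaks, _ = find_peaks(spectrum, prominence=prominence)

    def area_near(idx):
        peak = peaks[np.argmin(np.abs(peaks - idx))]
        lo, hi = max(peak - half_width, 0), min(peak + half_width, spectrum.size)
        baseline = np.median(spectrum)            # crude baseline estimate
        return np.trapz(spectrum[lo:hi] - baseline)

    return area_near(idx_a) / area_near(idx_b)
```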

  20. Automatic radiation dose monitoring for CT of trauma patients with different protocols: feasibility and accuracy

    International Nuclear Information System (INIS)

    Higashigaito, K.; Becker, A.S.; Sprengel, K.; Simmen, H.-P.; Wanner, G.; Alkadhi, H.

    2016-01-01

    Aim: To demonstrate the feasibility and accuracy of automatic radiation dose monitoring software for computed tomography (CT) of trauma patients in a clinical setting over time, and to evaluate the potential of radiation dose reduction using iterative reconstruction (IR). Materials and methods: In a time period of 18 months, data from 378 consecutive thoraco-abdominal CT examinations of trauma patients were extracted using automatic radiation dose monitoring software, and patients were split into three cohorts: cohort 1, 64-section CT with filtered back projection, 200 mAs tube current–time product; cohort 2, 128-section CT with IR and identical imaging protocol; cohort 3, 128-section CT with IR, 150 mAs tube current–time product. Radiation dose parameters from the software were compared with the individual patient protocols. Image noise was measured and image quality was semi-quantitatively determined. Results: Automatic extraction of radiation dose metrics was feasible and accurate in all (100%) patients. All CT examinations were of diagnostic quality. There were no differences between cohorts 1 and 2 regarding volume CT dose index (CTDIvol; p=0.62), dose–length product (DLP), and effective dose (ED, both p=0.95), while noise was significantly lower (chest and abdomen, both −38%, p<0.017). Compared to cohort 1, CTDIvol, DLP, and ED in cohort 3 were significantly lower (all −25%, p<0.017), as was the noise in the chest (–32%) and abdomen (–27%, both p<0.017). Compared to cohort 2, CTDIvol (–28%), DLP, and ED (both –26%) in cohort 3 were significantly lower (all, p<0.017), while noise in the chest (+9%) and abdomen (+18%) was significantly higher (all, p<0.017). Conclusion: Automatic radiation dose monitoring software is feasible and accurate, and can be implemented in a clinical setting for evaluating the effects of lowering radiation doses of CT protocols over time. - Highlights: • Automatic dose monitoring software can be

  1. Robust-mode analysis of hydrodynamic flows

    Science.gov (United States)

    Roy, Sukesh; Gord, James R.; Hua, Jia-Chen; Gunaratne, Gemunu H.

    2017-04-01

    The emergence of techniques to extract high-frequency high-resolution data introduces a new avenue for modal decomposition to assess the underlying dynamics, especially of complex flows. However, this task requires the differentiation of robust, repeatable flow constituents from noise and other irregular features of a flow. Traditional approaches involving low-pass filtering and principal component analysis have shortcomings. The approach outlined here, referred to as robust-mode analysis, is based on Koopman decomposition. Three applications to (a) a counter-rotating cellular flame state, (b) variations in financial markets, and (c) turbulent injector flows are provided.
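    Robust-mode analysis as described builds on Koopman decomposition; the sketch below shows the closely related exact dynamic mode decomposition computed from snapshot data, without the authors' robustness-selection step. The rank truncation is a user choice.

```python
# Sketch: exact dynamic mode decomposition (a standard Koopman approximation).
import numpy as np

def dmd(snapshots, rank):
    """snapshots: array of shape (n_features, n_times); returns (eigenvalues, modes)."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes
    return eigvals, modes
```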

  2. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.

  3. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    Science.gov (United States)

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354

  4. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Science.gov (United States)

    Moore, R Channing; Lee, Tyler; Theunissen, Frédéric E

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  5. Noise-invariant neurons in the avian auditory cortex: hearing the song in noise.

    Directory of Open Access Journals (Sweden)

    R Channing Moore

    Full Text Available Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

  6. A computationally simple and robust method to detect determinism in a time series

    DEFF Research Database (Denmark)

    Lu, Sheng; Ju, Ki Hwan; Kanters, Jørgen K.

    2006-01-01

    We present a new, simple, and fast computational technique, termed the incremental slope (IS), that can accurately distinguish deterministic from stochastic systems even when the variance of the noise is as large as or greater than that of the signal, and that remains robust for time-varying signals.

  7. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  8. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
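    As a toy forward model of the noise sources the paper measures, the sketch below combines photon shot noise, Gaussian read noise and quantization; the gain, read-noise level and bit depth are arbitrary illustration values, not the paper's fitted model (demosaicing is omitted).

```python
# Toy CCD noise model: photon shot noise + read noise + quantization.
import numpy as np

def simulate_ccd(photon_flux, gain=0.5, read_noise_e=5.0, full_well=20000, bits=12,
                 rng=None):
    """photon_flux: expected photo-electrons per pixel (2-D array)."""
    if rng is None:
        rng = np.random.default_rng(0)
    electrons = rng.poisson(photon_flux).astype(float)              # shot noise
    electrons += rng.normal(0.0, read_noise_e, photon_flux.shape)   # read noise
    electrons = np.clip(electrons, 0, full_well)                    # full-well clipping
    dn = np.round(electrons * gain)                                 # quantization to DN
    return np.clip(dn, 0, 2 ** bits - 1)
```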

  9. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  10. Individual differences in sound-in-noise perception are related to the strength of short-latency neural responses to noise.

    Directory of Open Access Journals (Sweden)

    Ekaterina Vinnik

    2011-02-01

    Full Text Available Important sounds can be easily missed or misidentified in the presence of extraneous noise. We describe an auditory illusion in which a continuous ongoing tone becomes inaudible during a brief, non-masking noise burst more than one octave away, which is unexpected given the frequency resolution of human hearing. Participants strongly susceptible to this illusory discontinuity did not perceive illusory auditory continuity (in which a sound subjectively continues during a burst of masking noise) when the noises were short, yet did so at longer noise durations. Participants who were not prone to illusory discontinuity showed robust early electroencephalographic responses at 40-66 ms after noise burst onset, whereas those prone to the illusion lacked these early responses. These data suggest that short-latency neural responses to auditory scene components reflect subsequent individual differences in the parsing of auditory scenes.

  11. Experimental noise-resistant Bell-inequality violations for polarization-entangled photons

    International Nuclear Information System (INIS)

    Bovino, Fabio A.; Castagnoli, Giuseppe; Cabello, Adan; Lamas-Linares, Antia

    2006-01-01

    We experimentally demonstrate that violations of Bell's inequalities for two-photon polarization-entangled states with colored noise are extremely robust, whereas this is not the case for states with white noise. Controlling the amount of noise by using the timing compensation scheme introduced by Kim et al. [Phys. Rev. A 67, 010301(R) (2003)], we have observed violations even for states with very high noise, in excellent agreement with the predictions of Cabello et al. [Phys. Rev. A 72, 052112 (2005)].

  12. Deliberation versus automaticity in decision making: Which presentation format features facilitate automatic decision making?

    Directory of Open Access Journals (Sweden)

    Anke Soellner

    2013-05-01

    Full Text Available The idea of automatic decision making approximating normatively optimal decisions without necessitating much cognitive effort is intriguing. Whereas recent findings support the notion that such fast, automatic processes explain empirical data well, little is known about the conditions under which such processes are selected rather than more deliberate stepwise strategies. We investigate the role of the format of information presentation, focusing explicitly on the ease of information acquisition and its influence on information integration processes. In a probabilistic inference task, the standard matrix employed in prior research was contrasted with a newly created map presentation format and additional variations of both presentation formats. Across three experiments, a robust presentation format effect emerged: Automatic decision making was more prevalent in the matrix (with high information accessibility), whereas sequential decision strategies prevailed when the presentation format demanded more information acquisition effort. Further scrutiny of the effect showed that it is not driven by the presentation format as such, but rather by the extent of information search induced by a format. Thus, if information is accessible with minimal need for information search, information integration is likely to proceed in a perception-like, holistic manner. In turn, a moderate demand for information search decreases the likelihood of behavior consistent with the assumptions of automatic decision making.

  13. The impact of auditory white noise on semantic priming.

    Science.gov (United States)

    Angwin, Anthony J; Wilson, Wayne J; Copland, David A; Barry, Robert J; Myatt, Grace; Arnott, Wendy L

    2018-04-10

    It has been proposed that white noise can improve cognitive performance for some individuals, particularly those with lower attention, and that this effect may be mediated by dopaminergic circuitry. Given existing evidence that semantic priming is modulated by dopamine, this study investigated whether white noise can facilitate semantic priming. Seventy-eight adults completed an auditory semantic priming task with and without white noise, at either a short or long inter-stimulus interval (ISI). Measures of both direct and indirect semantic priming were examined. Analysis of the results revealed significant direct and indirect priming effects at each ISI in noise and silence, however noise significantly reduced the magnitude of indirect priming. Analyses of subgroups with higher versus lower attention revealed a reduction to indirect priming in noise relative to silence for participants with lower executive and orienting attention. These findings suggest that white noise focuses automatic spreading activation, which may be driven by modulation of dopaminergic circuitry. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Noise level and MPEG-2 encoder statistics

    Science.gov (United States)

    Lee, Jungwoo

    1997-01-01

    Most software in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise that originates from film, the CCD camera, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by the noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from the camera and the tape player are analyzed, and models for the two types of noise are developed. The relationship between the noise, the scene type, and encoder statistics of a number of MPEG-2 parameters such as motion vector magnitude, prediction error, and quant scale is discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms such as preprocessing and rate control.

  15. Synthesis of digital locomotive receiver of automatic locomotive signaling

    Directory of Open Access Journals (Sweden)

    K. V. Goncharov

    2013-02-01

    Full Text Available Purpose. Automatic locomotive signaling of the continuous type with numeric coding (ALSN) has several disadvantages: a small number of signal indications, low noise immunity, high inertia and low functional flexibility. The search for new and more advanced signal processing methods for automatic locomotive signaling and the synthesis of a noise-immune digital locomotive receiver are therefore essential. Methodology. The proposed algorithm for detecting and identifying locomotive signaling codes is based on computing the cross-correlation between the received oscillation and reference signals. For selecting the threshold levels of the decision element, the following criterion was formulated: the locomotive receiver should maximize the probability of a correct decision for a given probability of dangerous errors. Findings. It was found that the random nature of the ALSN signal amplitude does not affect the detection algorithm. However, the distribution law and the numerical characteristics of the signal amplitude affect the probability of errors and should be considered when selecting the threshold levels. Based on the obtained algorithm for detecting and identifying ALSN signals, a digital locomotive receiver has been synthesized. It contains a band-pass filter, a peak limiter, a normalizing amplifier with an automatic gain control circuit, an analog-to-digital converter and a digital signal processor. Originality. The ALSN system is improved by transferring its hardware to a modern microelectronic element base and by applying more advanced methods for detecting and identifying locomotive signaling codes. Practical value. The use of digital technology in the ALSN locomotive receiver will expand its functionality and increase the noise immunity and operational stability of the locomotive signaling system under various destabilizing factors.

  16. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    Science.gov (United States)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is the detection of seismic events in a long-term, continuously recorded time series. Because the time-varying signal-to-noise ratio (SNR) of a long-term record and the uneven energy distribution of seismic event waveforms increase the difficulty of automatic seismic detection, an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied in this work. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through change point detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record); the statistical model of the signal in the neighbourhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. There are 3 advantages of SDAR. 1. Anti-noise ability: the SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, and is therefore an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: the SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these 3 advantages, the SDAR method can handle the non-stationary time-varying long
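    The published SDAR algorithm is not reproduced here; as a loose illustration of the idea of a discounted, sequentially updated AR model whose prediction error serves as a change-point score, consider the simplified AR(1) sketch below. The discounting parameter and the score normalisation are assumptions.

```python
# Simplified online change-point score with a discounted AR(1) model
# (in the spirit of SDAR; not the published algorithm).
import numpy as np

def discounted_ar1_scores(x, r=0.02):
    """r is the discounting parameter: larger r forgets the past faster."""
    mean, coef, var = x[0], 0.0, 1.0
    cxx, cxy = 1e-6, 0.0          # running (discounted) second moments
    prev = x[0]
    scores = np.zeros(len(x))
    for t in range(1, len(x)):
        pred = mean + coef * (prev - mean)
        err = x[t] - pred
        scores[t] = err ** 2 / var            # large score -> likely change point
        # discounted updates of mean, lag-1 regression coefficient and variance
        mean = (1 - r) * mean + r * x[t]
        cxx = (1 - r) * cxx + r * (prev - mean) ** 2
        cxy = (1 - r) * cxy + r * (prev - mean) * (x[t] - mean)
        coef = cxy / cxx
        var = (1 - r) * var + r * err ** 2
        prev = x[t]
    return scores
```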

  17. A Robust FLOM Based Spectrum Sensing Scheme under Middleton Class A Noise in IoT

    Directory of Open Access Journals (Sweden)

    Enwei Xu

    2017-01-01

    Full Text Available Accessibility to remote users in dynamic environments, high spectrum utilization, and no need to purchase spectrum make Cognitive Radio (CR) a feasible solution for wireless communications in the Internet of Things (IoT). Reliable spectrum sensing becomes the prerequisite for the establishment of communication between IoT-capable objects. Considering the application environment, spectrum sensing not only has to cope with man-made impulsive noise but also needs to overcome noise fluctuations. In this paper, we study the Fractional Lower Order Moments (FLOM) based spectrum sensing method under Middleton Class A noise and incorporate a Noise Power Estimation (NPE) module into the sensing system to deal with the issue of noise uncertainty. Moreover, the NPE process does not need noise-only samples. Analytical expressions for the probability of detection and the probability of false alarm are derived. The impact of the parameters of the NPE module on sensing performance is also analyzed. The theoretical analysis and simulation results show that our proposed sensing method achieves satisfactory performance at low SNR.
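    As a minimal illustration of a FLOM test statistic, the sketch below computes the fractional lower-order moment of the received samples and compares it with a scaled noise level; the order p, the margin and the externally supplied noise estimate (standing in for the paper's NPE output) are placeholders, not the derived expressions.

```python
# Sketch: FLOM-based detector for spectrum sensing under impulsive noise.
import numpy as np

def flom_statistic(samples, p=1.0):
    """Fractional lower-order moment of order p (0 < p < 2)."""
    return float(np.mean(np.abs(samples) ** p))

def flom_detect(samples, noise_flom, p=1.0, margin=1.5):
    """Declare the band occupied when the FLOM statistic exceeds the noise-only
    level by a chosen margin; noise_flom stands in for the NPE module's output."""
    return flom_statistic(samples, p) > margin * noise_flom
```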

  18. Effect of contrast material on image noise and radiation dose in adult chest computed tomography using automatic exposure control: A comparative study between 16-, 64- and 128-slice CT

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Jijo, E-mail: jijopaul1980@gmail.com [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Haus 23C UG, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Goethe University, Department of Biophysics, Max von Laue-Strasse 1, 60438 Frankfurt am Main (Germany); Schell, Boris, E-mail: boris.schell@googlemail.com [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Haus 23C UG, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Kerl, J. Matthias, E-mail: matthias.kerl@gmai.com [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Haus 23C UG, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Maentele, Werner, E-mail: maentele@biophysik.uni-frankfurt.de [Goethe University, Department of Biophysics, Max von Laue-Strasse 1, 60438 Frankfurt am Main (Germany); Vogl, Thomas J., E-mail: t.vogl@em.uni-frankfurt.de [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Haus 23C UG, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Bauer, Ralf W., E-mail: ralfwbauer@aol.com [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Haus 23C UG, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany)

    2011-08-15

    Purpose: To determine the difference in radiation dose between non-enhanced (NECT) and contrast-enhanced (CECT) chest CT examinations contributed by contrast material with different scanner generations with automatic exposure control (AEC). Methods and materials: 42 adult patients each received an NECT and a CECT of the chest in one session on a 16-, 64- or 128-slice CT scanner with the same scan protocol settings. However, AEC technology (Care Dose 4D, Siemens) underwent upgrades in each of the three scanner generations. DLP, CTDIvol and image noise were compared. Results: Although absolute differences in image noise were very small and ranged between 10 and 13 HU for NECT and CECT in median, the differences in image noise and dose (DLP: 16-slice:+2.8%; 64-slice:+3.9%; 128-slice:+5.6%) between NECT and CECT were statistically significant in all groups. Image noise and dose parameters were significantly lower in the most recent 128-slice CT generation for both NECT and CECT (DLP: 16-slice:+35.5-39.2%; 64-slice:+6.8-8.5%). Conclusion: The presence of contrast material led to an increase in dose for chest examinations in three CT generations with AEC. Although image noise values were significantly higher for CECT, the absolute differences were in a range of 3 HU. This can be regarded as negligible, thus indicating that AEC is able to fulfill its purpose of maintaining image quality. However, technological developments led to a significant reduction of dose and image noise with the latest CT generation.

  19. Noise data management using commercially available data-base software

    International Nuclear Information System (INIS)

    Damiano, B.; Thie, J.A.

    1988-01-01

    A data base has been created using commercially available software to manage the data collected by an automated noise data acquisition system operated by Oak Ridge National Laboratory at the Fast Flux Test Facility (FFTF). The data base was created to store, organize, and retrieve selected features of the nuclear and process signal noise data, because the large volume of data collected by the automated system makes manual data handling and interpretation based on visual examination of noise signatures impractical. Compared with manual data handling, use of the data base allows the automatically collected data to be utilized more fully and effectively. The FFTF noise data base uses the Oracle Relational Data Base Management System implemented on a desktop personal computer

  20. Objective and subjective rating of tonal noise radiated from UK wind farms: Pt. 2

    International Nuclear Information System (INIS)

    1996-01-01

    This final report provides data on the assessment of tonal noise radiation from wind turbines in the United Kingdom. Both objective and subjective assessments of the noise pollution from various wind farms are incorporated in the study. Previous subjective tests are verified here using a larger subject and sample size compared to the initial study. The study also aims to produce an objective automatic tonal assessment procedure which identifies tones and broad band masking noise in wind farm radiated noise spectra. (UK)

  1. Process and device for automatically surveying complex installations

    International Nuclear Information System (INIS)

    Pekrul, P.J.; Thiele, A.W.

    1976-01-01

    A description is given of a process for automatically analysing separate signal-processing channels in real time, one channel per signal, in a facility where the time-varying signals coming from transducers at selected points contain significant background noise; the transducers continuously monitor the operating conditions of the various components of the installation. The signals are intended to reveal potential breakdowns, to support conclusions as to the severity of these potential breakdowns, and to indicate to an operator the measures to be taken in consequence. The feature of this process is that it comprises the automatic and successive selection of each channel for spectral analysis, the automatic processing of the signal of each selected channel to produce energy spectral density data at pre-determined frequencies, the automatic comparison of the energy spectral density data of each channel with pre-determined sets of frequency-dependent limits, and the automatic indication to the operator of the condition of the various components of the installation associated with each channel and of the measures to be taken depending on the set of limits [fr]

  2. Noise-driven phenomena in hysteretic systems

    CERN Document Server

    Dimian, Mihai

    2014-01-01

    Noise-Driven Phenomena in Hysteretic Systems provides a general approach to nonlinear systems with hysteresis driven by noisy inputs, which leads to a unitary framework for the analysis of various stochastic aspects of hysteresis. This book includes integral, differential and algebraic models that are used to describe scalar and vector hysteretic nonlinearities originating from various areas of science and engineering. The universality of the authors' approach is also reflected by the diversity of the models used to portray the input noise, from the classical Gaussian white noise to its impulsive forms, often encountered in economics and biological systems, and pink noise, ubiquitous in multi-stable electronic systems. The book is accompanied by HysterSoft©, a robust simulation environment designed to perform complex hysteresis modeling, that can be used by the reader to reproduce many of the results presented in the book as well as to research both disruptive and constructive effects of noise in hysteret...

  3. Evaluation of Robust Estimators Applied to Fluorescence Assays

    Directory of Open Access Journals (Sweden)

    U. Ruotsalainen

    2007-12-01

    Full Text Available We evaluated standard robust methods for the estimation of the fluorescence signal in novel assays used for determining biomolecule concentrations. The objective was to obtain an accurate and reliable estimate using as few observations as possible by decreasing the influence of outliers. We assumed the true signals to have a Gaussian distribution, while no assumptions about the outliers were made. The experimental results showed that the arithmetic mean performs poorly even with modest deviations. Further, the robust methods, especially the M-estimators, performed extremely well. The results proved that the use of robust methods is advantageous in estimation problems where noise and deviations are significant, such as in biological and medical applications.
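    To illustrate the kind of M-estimator evaluated above, the sketch below computes a Huber M-estimate of location by iteratively reweighted least squares and contrasts it with the arithmetic mean on data containing an outlier; the tuning constant 1.345 is the conventional choice and need not match the study's estimators.

```python
# Sketch: Huber M-estimate of a fluorescence signal level, compared with the mean.
import numpy as np

def huber_location(x, c=1.345, tol=1e-6, max_iter=100):
    """Iteratively reweighted least-squares estimate of location (Huber weights)."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745      # robust scale from the MAD
    if scale == 0:
        return mu
    for _ in range(max_iter):
        r = np.abs(x - mu) / scale
        w = np.ones_like(r)
        w[r > c] = c / r[r > c]                     # down-weight outlying observations
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

signal = np.r_[np.random.default_rng(0).normal(100.0, 2.0, 20), 400.0]  # one outlier
print(np.mean(signal), huber_location(signal))  # the M-estimate stays near 100
```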

  4. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  5. Automaticity of walking: functional significance, mechanisms, measurement and rehabilitation strategies

    Directory of Open Access Journals (Sweden)

    David J Clark

    2015-05-01

    Full Text Available Automaticity is a hallmark feature of walking in adults who are healthy and well-functioning. In the context of walking, ‘automaticity’ refers to the ability of the nervous system to successfully control typical steady state walking with minimal use of attention-demanding executive control resources. Converging lines of evidence indicate that walking deficits and disorders are characterized in part by a shift in the locomotor control strategy from healthy automaticity to compensatory executive control. This is potentially detrimental to walking performance, as an executive control strategy is not optimized for locomotor control. Furthermore, it places excessive demands on a limited pool of executive reserves. The result is compromised ability to perform basic and complex walking tasks and heightened risk for adverse mobility outcomes including falls. Strategies for rehabilitation of automaticity are not well defined, which is due to both a lack of systematic research into the causes of impaired automaticity and to a lack of robust neurophysiological assessments by which to gauge automaticity. These gaps in knowledge are concerning given the serious functional implications of compromised automaticity. Therefore, the objective of this article is to advance the science of automaticity of walking by consolidating evidence and identifying gaps in knowledge regarding: (a) functional significance of automaticity; (b) neurophysiology of automaticity; (c) measurement of automaticity; (d) mechanistic factors that compromise automaticity; and (e) strategies for rehabilitation of automaticity.

  6. A robust controller design method for feedback substitution schemes using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Mirsha M; Hadjiloucas, Sillas; Becerra, Victor M, E-mail: s.hadjiloucas@reading.ac.uk [Cybernetics, School of Systems Engineering, University of Reading, RG6 6AY (United Kingdom)

    2011-08-17

    Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.

  7. A Comparison of seismic instrument noise coherence analysis techniques

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.

    2011-01-01

    The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
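    As a simplified two-sensor illustration of coherence-based incoherent-noise estimation (in the spirit of the Holcomb approach; the three-sensor Sleeman method differs), the sketch below estimates the part of one sensor's PSD that is not coherent with a co-located reference. The sampling rate and segment length are placeholders.

```python
# Sketch: two-sensor estimate of incoherent (self-) noise via magnitude-squared coherence.
import numpy as np
from scipy.signal import welch, coherence

def incoherent_noise_psd(x, y, fs, nperseg=4096):
    """PSD of the part of x that is not coherent with the co-located sensor y."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, cxy = coherence(x, y, fs=fs, nperseg=nperseg)   # magnitude-squared coherence
    return f, pxx * (1.0 - cxy)
```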

  8. A Robust Feedforward Model of the Olfactory System.

    Directory of Open Access Journals (Sweden)

    Yilun Zhang

    2016-04-01

    Full Text Available Most natural odors have sparse molecular composition. This makes the principles of compressed sensing potentially relevant to the structure of the olfactory code. Yet, the largely feedforward organization of the olfactory system precludes reconstruction using standard compressed sensing algorithms. To resolve this problem, recent theoretical work has shown that signal reconstruction could take place as a result of a low dimensional dynamical system converging to one of its attractor states. However, the dynamical aspects of optimization slowed down odor recognition and were also found to be susceptible to noise. Here we describe a feedforward model of the olfactory system that achieves both strong compression and fast reconstruction that is also robust to noise. A key feature of the proposed model is a specific relationship between how odors are represented at the glomeruli stage, which corresponds to a compression, and the connections from glomeruli to third-order neurons (neurons in the olfactory cortex of vertebrates or Kenyon cells in the mushroom body of insects), which in the model corresponds to reconstruction. We show that should this specific relationship hold true, the reconstruction will be both fast and robust to noise, and in particular to the false activation of glomeruli. The predicted connectivity rate from glomeruli to third-order neurons can be tested experimentally.

  9. dc SQUID electronics based on adaptive noise cancellation and a high open-loop gain controller

    International Nuclear Information System (INIS)

    Seppae, H.

    1992-01-01

    A low-noise SQUID readout electronics with a high slew rate and an automatic gain control feature has been developed. Flux noise levels of 5x10^-7 Φ0/√Hz at 1 kHz and 2x10^-6 Φ0/√Hz at 1 Hz have been measured with this readout scheme. The system tolerates sinusoidal disturbances having amplitudes up to 140 Φ0 at 1 kHz without losing lock. The electronics utilizes a cooled GaAs FET to control the cancellation of the voltage noise of the room temperature amplifier, a PI 3/2 controller to provide a high open-loop gain at low frequencies, and a square-wave flux and offset voltage modulation to enable automatic control of the noise reduction. The cutoff frequency of the flux-locked-loop is 300 kHz and the feedback gain is more than 130 dB at 10 Hz. (orig.)

  10. An automatic microseismic or acoustic emission arrival identification scheme with deep recurrent neural networks

    Science.gov (United States)

    Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi

    2018-02-01

    Conventional arrival pick-up algorithms cannot avoid manual modification of the parameters for the simultaneous identification of multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events based on deep recurrent neural networks. The arrival identification was performed in two important steps, a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using the Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from acoustic emission rock physics experiments. In order to obtain data sets under different SNRs, random noise was added to the raw experimental data sets. The results showed that the proposed method attained a hit rate above 80 per cent at an SNR of 0 dB, and approximately 70 per cent at an SNR of -5 dB, with an absolute error within 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
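
    The record does not give the network dimensions, so the following is only a minimal sketch of the general idea, a sample-wise LSTM classifier that outputs an arrival probability per time step; the layer sizes, optimizer and toy data are assumptions rather than the published architecture.

      import numpy as np
      import tensorflow as tf

      # toy data: single-component waveforms with a per-sample "arrival" label
      x = np.random.randn(32, 2000, 1).astype("float32")
      y = np.zeros((32, 2000, 1), dtype="float32")
      y[:, 1000:1010, 0] = 1.0                     # pretend the arrival is near sample 1000

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(None, 1)),
          tf.keras.layers.LSTM(64, return_sequences=True),
          tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")
      model.fit(x, y, epochs=2, batch_size=8)

      # the pick is taken at the maximum predicted arrival probability
      probs = model.predict(x[:1])[0, :, 0]
      print(int(np.argmax(probs)))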

  11. Modeling stochasticity and robustness in gene regulatory networks.

    Science.gov (United States)

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to over-representation of noise in GRNs and hence a lack of correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.

  12. Traffic Noise Assessment at Residential Areas in Skudai, Johor

    Science.gov (United States)

    Sulaiman, F. S.; Darus, N.; Mashros, N.; Haron, Z.; Yahya, K.

    2018-03-01

    Vehicles passing by on roadways in residential areas may produce unpleasant traffic noise that affects the residents. This paper presents a traffic noise assessment of three selected residential areas located in Skudai, Johor. The objectives of this study are to evaluate traffic characteristics at the selected residential areas, determine the related noise indices, and assess the impact of traffic noise. Traffic characteristics such as daily traffic volume and vehicle speed were evaluated using an automatic traffic counter (ATC). Meanwhile, noise indices such as the equivalent continuous sound pressure level (LAeq) and the noise levels exceeded 10% (L10) and 90% (L90) of the measurement time were determined using a sound level meter (SLM). In addition, the traffic noise index (TNI) and noise pollution level (LNP) were calculated from the measured noise indices. The results showed an increase in noise level, up to a maximum of 60 to 70 dBA, with increasing traffic volume. Noise levels of more than 70 dBA were also observed even though average vehicle speed did not vary significantly. Nevertheless, the LAeq, TNI, and LNP values for all sites during daytime were lower than the maximum recommended levels. Thus, residents in the three studied areas were not affected in terms of quality of life and health.
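
    For readers who want to recompute the derived indices, the commonly quoted textbook definitions of TNI and LNP are sketched below; the exact formulas used in the study may differ, and the numbers are placeholders.

      def traffic_noise_index(l10, l90):
          # TNI = 4 * (L10 - L90) + L90 - 30, in dBA (common textbook definition)
          return 4.0 * (l10 - l90) + l90 - 30.0

      def noise_pollution_level(laeq, l10, l90):
          # LNP is often approximated as LAeq + (L10 - L90)
          return laeq + (l10 - l90)

      # placeholder values for one site and measurement period
      l10, l90, laeq = 72.0, 58.0, 66.5
      print(traffic_noise_index(l10, l90), noise_pollution_level(laeq, l10, l90))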

  13. Description of Anomalous Noise Events for Reliable Dynamic Traffic Noise Mapping in Real-Life Urban and Suburban Soundscapes

    Directory of Open Access Journals (Sweden)

    Francesc Alías

    2017-02-01

    Full Text Available Traffic noise is one of the main pollutants in urban and suburban areas. European authorities have driven several initiatives to study, prevent and reduce the effects of exposure of population to traffic. Recent technological advances have allowed the dynamic computation of noise levels by means of Wireless Acoustic Sensor Networks (WASN such as that developed within the European LIFE DYNAMAP project. Those WASN should be capable of detecting and discarding non-desired sound sources from road traffic noise, denoted as anomalous noise events (ANE, in order to generate reliable noise level maps. Due to the local, occasional and diverse nature of ANE, some works have opted to artificially build ANE databases at the cost of misrepresentation. This work presents the production and analysis of a real-life environmental audio database in two urban and suburban areas specifically conceived for anomalous noise events’ collection. A total of 9 h 8 min of labelled audio data is obtained differentiating among road traffic noise, background city noise and ANE. After delimiting their boundaries manually, the acoustic salience of the ANE samples is automatically computed as a contextual signal-to-noise ratio (SNR. The analysis of the real-life environmental database shows high diversity of ANEs in terms of occurrences, durations and SNRs, as well as confirming both the expected differences between the urban and suburban soundscapes in terms of occurrences and SNRs, and the rare nature of ANE.
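
    The record describes the salience of each ANE as a contextual signal-to-noise ratio but does not reproduce the formula, so the sketch below uses one plausible form, the ratio of mean event power to the mean power of the surrounding background, in dB; the context window length and synthetic data are assumptions.

      import numpy as np

      def contextual_snr_db(audio, event_start, event_end, context=5000):
          """SNR of a labelled event relative to the audio immediately around it."""
          event = audio[event_start:event_end]
          before = audio[max(0, event_start - context):event_start]
          after = audio[event_end:event_end + context]
          background = np.concatenate([before, after])
          return 10.0 * np.log10(np.mean(event ** 2) / np.mean(background ** 2))

      # synthetic usage: a quiet traffic-noise floor with one louder burst
      x = 0.01 * np.random.randn(100000)
      x[40000:42000] += 0.1 * np.random.randn(2000)
      print(contextual_snr_db(x, 40000, 42000))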

  14. Automatic quantitative renal scintigraphy

    International Nuclear Information System (INIS)

    Valeyre, J.; Deltour, G.; Delisle, M.J.; Bouchard, A.

    1976-01-01

    Renal scintigraphy data may be analyzed automatically by the use of a processing system coupled to an Anger camera (TRIDAC-MULTI 8 or CINE 200). The computing sequence is as follows: normalization of the images; background noise subtraction on both images; evaluation of mercury-197 uptake by the liver and spleen; calculation of the activity fraction of each kidney with respect to the injected dose, taking into account the kidney depth, with the results referred to normal values; and output of the results. Automation reduces the scatter of the parameters and, by simplifying the procedure, is a great asset in routine work [fr

  15. Data preprocessing methods for robust Fourier ptychographic microscopy

    Science.gov (United States)

    Zhang, Yan; Pan, An; Lei, Ming; Yao, Baoli

    2017-12-01

    Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique that achieves gigapixel images with both high resolution and a large field-of-view. In the current FPM experimental setup, the dark-field images acquired with high-angle illuminations are easily overwhelmed by stray light and background noise due to the low signal-to-noise ratio, significantly degrading the achievable resolution of the FPM approach. We provide an overall and systematic data preprocessing scheme to enhance FPM's performance, which involves sampling analysis, underexposed/overexposed treatments, background noise suppression, and stray light elimination. It is demonstrated experimentally with both a US Air Force (USAF) 1951 resolution target and biological samples that the benefit of the noise removal by these methods far outweighs the defect of the accompanying signal loss, as part of the lost signal can be compensated by the improved consistency among the captured raw images. In addition, the reported nonparametric scheme can be flexibly combined with existing state-of-the-art algorithms, providing the FPM approach with stronger noise robustness in various applications.
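
    As one illustration of this kind of preprocessing (not the authors' exact pipeline), the sketch below suppresses an additive background in a dark-field raw image by subtracting a level estimated from a presumed sample-free border region; the border width, clipping and synthetic frame are assumptions.

      import numpy as np

      def suppress_background(raw, border=20):
          """Subtract an additive background level estimated from the image border."""
          edges = np.concatenate([raw[:border, :].ravel(), raw[-border:, :].ravel(),
                                  raw[:, :border].ravel(), raw[:, -border:].ravel()])
          cleaned = raw.astype(float) - np.median(edges)
          return np.clip(cleaned, 0, None)      # negative photon counts are unphysical

      # synthetic dark-field frame: stray light and noise plus a weak sample signal
      frame = np.random.poisson(5.0, size=(256, 256)).astype(float)
      frame[100:150, 100:150] += 50.0
      print(suppress_background(frame).max())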

  16. Comparison radiation dose of Z-axis automatic tube current modulation technique with fixed tube current multi-detector row CT scanning of lower extremity venography

    International Nuclear Information System (INIS)

    Yoo, Beong Gyu; Kweon, Dae Cheol; Lee, Jong Seok; Jang, Keun Jo; Jeon, Sang Hwan; Kim, Yong Soo

    2007-01-01

    Z-axis automatic tube current modulation automatically adjusts the tube current based on the size of the body region scanned. The purpose of the current study was to compare the noise and radiation dose of multi-detector row CT (MDCT) of the lower extremity performed with the Z-axis automatic tube current modulation technique and with manually selected fixed tube current. Fifty consecutive patients underwent MDCT venography of the lower extremity with a MDCT scanner using fixed tube current and the Z-axis automatic tube current modulation technique (10, 11 and 12 HU noise index, 70∼450 mA). Scanning parameters included 120 kVp, 0.5 second gantry rotation time, 1.35:1 beam pitch, and 1 mm reconstructed section thickness. For each subject, images obtained with Z-axis modulation were compared with previous images obtained with fixed tube current (200, 250, 300 mA) and with all other parameters identical. Images were compared for noise at five levels: the iliac, femoral, popliteal, tibial, and peroneal veins of the lower extremity. The tube current and gantry rotation time used for acquisitions at these levels were recorded. All CT examinations of the study and control groups were diagnostically acceptable, though objective noise was significantly greater with Z-axis automatic tube current modulation. Compared with fixed tube current, Z-axis modulation resulted in a reduction of CTDIvol (range, -6.5%∼-35.6%) and DLP (range, -0.2%∼-20.2%). Compared with manually selected fixed tube current, Z-axis automatic tube current modulation resulted in a reduced radiation dose at MDCT lower extremity venography.

  17. Time-Distance Helioseismology: Noise Estimation

    Science.gov (United States)

    Gizon, L.; Birch, A. C.

    2004-10-01

    As in global helioseismology, the dominant source of noise in time-distance helioseismology measurements is realization noise due to the stochastic nature of the excitation mechanism of solar oscillations. Characterizing noise is important for the interpretation and inversion of time-distance measurements. In this paper we introduce a robust definition of travel time that can be applied to very noisy data. We then derive a simple model for the full covariance matrix of the travel-time measurements. This model depends only on the expectation value of the filtered power spectrum and assumes that solar oscillations are stationary and homogeneous on the solar surface. The validity of the model is confirmed through comparison with SOHO MDI measurements in a quiet-Sun region. We show that the correlation length of the noise in the travel times is about half the dominant wavelength of the filtered power spectrum. We also show that the signal-to-noise ratio in quiet-Sun travel-time maps increases roughly as the square root of the observation time and is at maximum for a distance near half the length scale of supergranulation.

  18. Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain

    Science.gov (United States)

    Yin, Xingyao; Li, Kun; Zong, Zhaoyun

    2016-10-01

    AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited in view of its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on the mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in the inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion, compensating for the low-frequency components of the seismic signals. In addition, the different frequency components of the seismic signals decouple automatically, which allows the inverse problem to be solved by multi-component successive iterations and improves its convergence precision. Resolution superior to that of conventional time-domain pre-stack inversion can therefore be achieved. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method can obtain stable inversion results for elastic parameters from pre-stack seismic data, in conformity with the real logging data.

  19. Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    OpenAIRE

    Pelapur, Rengarajan; Prasath, Surya; Palaniappan, Kannappan

    2014-01-01

    We are building a computerized image analysis system for Dura Mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi focal image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmenta...

  20. Allegro: noise performance and the ongoing search for gravitational waves

    International Nuclear Information System (INIS)

    Heng, I S; Daw, E; Giaime, J; Hamilton, W O; Mchugh, M P; Johnson, W W

    2002-01-01

    The noise performance of Allegro since 1993 is summarized. We show that the noise level of Allegro is, in general, stationary. Non-Gaussian impulse excitations persist despite efforts to isolate the detector from environmental disturbances. Some excitations are caused by seismic activity and flux jumps in the SQUID. Algorithms to identify and automatically veto these events are presented. Also, the contribution of Allegro to collaborations with other resonant-mass detectors via the International Gravitational Event Collaboration and with LIGO is reviewed

  2. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
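
    A compact sketch of the quantities both methods operate on, the correlation integral, the derivative of its log-log plot and a crude 'most flat interval' search, is given below; it is only illustrative and omits the reliability indices and statistical testing described in the record.

      import numpy as np
      from scipy.spatial.distance import pdist

      def correlation_dimension_curve(points, n_r=30):
          """Correlation integral C(r) and local slope d(log C)/d(log r)."""
          d = pdist(points)                               # all pairwise distances
          r = np.logspace(np.log10(d.min() + 1e-12), np.log10(d.max()), n_r)
          c = np.array([np.mean(d < ri) for ri in r])     # correlation integral
          log_r, log_c = np.log(r), np.log(c + 1e-300)
          return log_r, np.gradient(log_c, log_r)         # derivative of the log-log plot

      def most_flat_interval(slope, width=8):
          """Index range where the local slope is flattest (smallest spread)."""
          stds = [np.std(slope[i:i + width]) for i in range(len(slope) - width)]
          i0 = int(np.argmin(stds))
          return i0, i0 + width, float(np.mean(slope[i0:i0 + width]))

      # usage: noisy circle, so the estimated dimension should be close to 1
      theta = 2 * np.pi * np.random.rand(2000)
      pts = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(2000, 2)
      log_r, slope = correlation_dimension_curve(pts)
      print(most_flat_interval(slope))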

  3. WHO Environmental Noise Guidelines for the European Region: A Systematic Review on Environmental Noise and Cognition.

    Science.gov (United States)

    Clark, Charlotte; Paunovic, Katarina

    2018-02-07

    This systematic review assesses the quality of the evidence across individual studies on the effect of environmental noise (road traffic, aircraft, and train and railway noise) on cognition. Quantitative non-experimental studies of the association between environmental noise exposure and child and adult cognitive performance published up to June 2015 were reviewed: no limit was placed on the start date for the search. A total of 34 papers were identified, all of which studied child populations. 82% of the papers were of cross-sectional design, with fewer studies of longitudinal or intervention design. A range of cognitive outcomes was examined. The quality of the evidence across the studies for each individual noise source and cognitive outcome was assessed using an adaptation of the GRADE methodology. Given the predominance of cross-sectional studies, this review found that the quality of the evidence across studies ranged from moderate quality for an effect for some outcomes, e.g., aircraft noise effects on reading comprehension and on long-term memory, to no effect for other outcomes such as attention and executive function and for some noise sources such as road traffic noise and railway noise. The GRADE evaluation of low-quality evidence across studies for some cognitive domains and for some noise sources does not necessarily mean that there are no effects: rather, that more robust and a greater number of studies are required.

  4. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.

  5. Physics- and engineering knowledge-based geometry repair system for robust parametric CAD geometries

    OpenAIRE

    Li, Dong

    2012-01-01

    In modern multi-objective design optimisation, an effective geometry engine is becoming an essential tool and its performance has a significant impact on the entire process. Building a parametric geometry requires difficult compromises between the conflicting goals of robustness and flexibility. The work presents a solution for improving the robustness of parametric geometry models by capturing and modelling relative engineering knowledge into a surrogate model, and deploying it automatically...

  6. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
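
    A minimal sketch of the feature-combination step using non-negative least squares, one of the three combination methods named above; the feature matrix and human scores are random placeholders rather than real evaluation data.

      import numpy as np
      from scipy.optimize import nnls

      # rows = summaries, columns = automatically computed content/linguistic features
      features = np.random.rand(50, 6)
      human_scores = np.random.rand(50)          # e.g. averaged human quality judgements

      weights, residual = nnls(features, human_scores)   # non-negative feature weights
      predicted_quality = features @ weights             # metric score for each summary
      print(weights, residual)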

  7. Genetic noise control via protein oligomerization

    Energy Technology Data Exchange (ETDEWEB)

    Ghim, C; Almaas, E

    2008-06-12

    Gene expression in a cell entails random reaction events occurring over disparate time scales. Thus, molecular noise that often results in phenotypic and population-dynamic consequences sets a fundamental limit to biochemical signaling. While there have been numerous studies correlating the architecture of cellular reaction networks with noise tolerance, only a limited effort has been made to understand the dynamical role of protein-protein associations. We have developed a fully stochastic model for the positive feedback control of a single gene, as well as a pair of genes (toggle switch), integrating quantitative results from previous in vivo and in vitro studies. In particular, we explicitly account for the fast protein binding-unbinding kinetics, RNA polymerases, and the promoter/operator sequences of DNA. We find that the overall noise-level is reduced and the frequency content of the noise is dramatically shifted to the physiologically irrelevant high-frequency regime in the presence of protein dimerization. This is independent of the choice of monomer or dimer as transcription factor and persists throughout the multiple model topologies considered. For the toggle switch, we additionally find that the presence of a protein dimer, either homodimer or heterodimer, may significantly reduce its intrinsic switching rate. Hence, the dimer promotes the robust function of bistable switches by preventing the uninduced (induced) state from randomly being induced (uninduced). The specific binding between regulatory proteins provides a buffer that may prevent the propagation of fluctuations in genetic activity. The capacity of the buffer is a non-monotonic function of association-dissociation rates. Since the protein oligomerization per se does not require extra protein components to be expressed, it provides a basis for the rapid control of intrinsic or extrinsic noise. The stabilization of phenotypically important toggle switches, and nested positive feedback loops in

  8. Synchronization of uncoupled excitable systems induced by white and coloured noise

    International Nuclear Information System (INIS)

    Zambrano, Samuel; Marino, Ines P; Seoane, Jesus M; Sanjuan, Miguel A F; Euzzor, Stefano; Geltrude, Andrea; Meucci, Riccardo; Arecchi, Fortunato T

    2010-01-01

    We study, both numerically and experimentally, the synchronization of uncoupled excitable systems due to a common noise. We consider two identical FitzHugh-Nagumo systems, which display both spiking and non-spiking behaviours in chaotic or periodic regimes. An electronic circuit provides a laboratory implementation of these dynamics. Synchronization is tested with both white and coloured noise, showing that coloured noise is more effective in inducing synchronization of the systems. We also study the effects on the synchronization of parameter mismatch and of the presence of intrinsic (not common) noise, and we conclude that the best performance of coloured noise is robust under these distortions.

  9. Identification of conductive hearing loss using air conduction tests alone: reliability and validity of an automatic test battery.

    Science.gov (United States)

    Convery, Elizabeth; Keidser, Gitte; Seeto, Mark; Freeston, Katrina; Zhou, Dan; Dillon, Harvey

    2014-01-01

    The primary objective of this study was to determine whether a combination of automatically administered pure-tone audiometry and a tone-in-noise detection task, both delivered via an air conduction (AC) pathway, could reliably and validly predict the presence of a conductive component to the hearing loss. The authors hypothesized that performance on the battery of tests would vary according to hearing loss type. A secondary objective was to evaluate the reliability and validity of a novel automatic audiometry algorithm to assess its suitability for inclusion in the test battery. Participants underwent a series of hearing assessments that were conducted in a randomized order: manual pure-tone air conduction audiometry and bone conduction audiometry; automatic pure-tone air conduction audiometry; and an automatic tone-in-noise detection task. The automatic tests were each administered twice. The ability of the automatic test battery to: (a) predict the presence of an air-bone gap (ABG); and (b) accurately measure AC hearing thresholds was assessed against the results of manual audiometry. Test-retest conditions were compared to determine the reliability of each component of the automatic test battery. Data were collected on 120 ears from normal-hearing and conductive, sensorineural, and mixed hearing-loss subgroups. Performance differences between different types of hearing loss were observed. Ears with a conductive component (conductive and mixed ears) tended to have normal signal to noise ratios (SNR) despite impaired thresholds in quiet, while ears without a conductive component (normal and sensorineural ears) demonstrated, on average, an increasing relationship between their thresholds in quiet and their achieved SNR. Using the relationship between these two measures among ears with no conductive component as a benchmark, the likelihood that an ear has a conductive component can be estimated based on the deviation from this benchmark. The sensitivity and

  10. Towards a fully automatic and robust DIMM (DIMMA)

    International Nuclear Information System (INIS)

    Varela, A M; Muñoz-Tuñón, C; Del Olmo-García, A M; Rodríguez, L F; Delgado, J M; Castro-Almazán, J A

    2015-01-01

    Quantitative seeing measurements have been provided at the Canarian Observatories since 1990 by differential image motion monitors (DIMMs). Image quality needs to be studied in long-term (routine) measurements. This is important, for instance, in deciding on the siting of large telescopes or in the development of adaptive optics programmes, not to mention the development and design of new instruments. On the other hand, continuous real-time monitoring is essential in the day-to-day operation of telescopes. These routine measurements have to be carried out by standard, easy-to-operate and cross-calibrated instruments that are required to be operational with minimum intervention over many years. The DIMMA (Automatic Differential Image Motion Monitor) is the next step, a fully automated seeing monitor that is capable of providing data without manual operation and in remote locations. Currently, the IAC has two DIMMs working at the Roque de los Muchachos Observatory (ORM) and the Teide Observatory (OT). They are robotic and require an operator to start and initialize the program, focus the telescope, change the star when needed and turn off at the end of the night, all of which is done remotely. With a view to automation, we have designed a code for monitoring image quality (avoiding spurious data) and a program for autofocus, which is presented here. The data quality control protocol is also given. (paper)

  11. STUDY OF AUTOMATIC IMAGE RECTIFICATION AND REGISTRATION OF SCANNED HISTORICAL AERIAL PHOTOGRAPHS

    Directory of Open Access Journals (Sweden)

    H. R. Chen

    2016-06-01

    Full Text Available Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps and images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images using SIFT (Scale Invariant Feature Transform) so that the great quantity of images can be handled by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. This algorithm extracts extreme values in scale space as invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through a least-squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps the control images. As the database is updated, more and more query images can be matched and aligned automatically. Other research on environmental change across multiple time periods can then be carried out with these geo-referenced spatio-temporal data.
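
    A minimal sketch of the SIFT-plus-RANSAC matching step with OpenCV (assuming a build that includes SIFT); the file names and thresholds are placeholders, and the least-squares georeferencing step is not shown.

      import cv2
      import numpy as np

      img1 = cv2.imread("historical_photo_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
      img2 = cv2.imread("historical_photo_b.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # ratio-test matching followed by RANSAC outlier removal
      matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
      print(int(mask.sum()), "conjugate points survive RANSAC")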

  12. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data to estimate the integrated variance of asset prices is beneficial, but as the number of observations increases, so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimating the integrated variance that is robust to microstructure noise, as well as for testing for the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
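
    A sketch of the regression idea under the standard additive i.i.d. noise assumption, where the expected realized variance grows roughly linearly with the number of observations (intercept close to the integrated variance, slope close to twice the noise variance); the subsampling grid and simulated prices are assumptions.

      import numpy as np

      def realized_variance(prices):
          return np.sum(np.diff(np.log(prices)) ** 2)

      def regression_iv_estimate(prices, steps=(1, 2, 5, 10, 20, 50)):
          """Regress realized variance on the number of observations used."""
          n_obs, rvs = [], []
          for k in steps:
              sub = prices[::k]                        # sparser sampling, fewer observations
              n_obs.append(len(sub) - 1)
              rvs.append(realized_variance(sub))
          slope, intercept = np.polyfit(n_obs, rvs, 1)
          return intercept, slope / 2.0                # (integrated variance, noise variance)

      # simulated efficient log-prices plus i.i.d. microstructure noise
      n, sigma, noise_sd = 23400, 0.2, 0.0005
      efficient = np.cumsum(sigma / np.sqrt(n) * np.random.randn(n))
      observed = 100.0 * np.exp(efficient + noise_sd * np.random.randn(n))
      print(regression_iv_estimate(observed))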

  13. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying joint anatomy for measuring the spacing between the bones. However, separation of coupled bones in CT images is sometimes difficult due to ambiguous gray values arising from noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying a graph cut on a Markov random field model to a region of interest (ROI) identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region including only two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint is set by initial seeds which are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  14. A molecular noise generator

    International Nuclear Information System (INIS)

    Lu Ting; Ferry, Michael; Hasty, Jeff; Weiss, Ron

    2008-01-01

    Recent studies have demonstrated that intracellular variations in the rate of gene expression are of fundamental importance to cellular function and development. While such 'noise' is often considered detrimental in the context of perturbing genetic systems, it can be beneficial in processes such as species diversification and facilitation of evolution. A major difficulty in exploring such effects is that the magnitude and spectral properties of the induced variations arise from some intrinsic cellular process that is difficult to manipulate. Here, we present two designs of a molecular noise generator that allow for the flexible modulation of the noise profile of a target gene. The first design uses a dual-signal mechanism that enables independent tuning of the mean and variability of an output protein. This is achieved through the combinatorial control of two signals that regulate transcription and translation separately. We then extend the design to allow for DNA copy-number regulation, which leads to a wider tuning spectrum for the output molecule. To gain a deeper understanding of the circuit's functionality in a realistic environment, we introduce variability in the input signals in order to ascertain the degree of noise induced by the control process itself. We conclude by illustrating potential applications of the noise generator, demonstrating how it could be used to ascertain the robust or fragile properties of a genetic circuit

  15. Fusion of Color and Depth Camera Data for Robust Fall Detection

    NARCIS (Netherlands)

    Josemans, W.; Englebienne, G.; Kröse, B.; Battiato, S.; Braz, J.

    2013-01-01

    The availability of cheap imaging sensors makes it possible to increase the robustness of vision-based alarm systems. This paper explores the benefit of data fusion in the application of fall detection. Falls are a common source of injury for elderly people and automatic fall detection is,

  16. Robust gates for holonomic quantum computation

    International Nuclear Information System (INIS)

    Florio, Giuseppe; Pascazio, Saverio; Facchi, Paolo; Fazio, Rosario; Giovannetti, Vittorio

    2006-01-01

    Non-Abelian geometric phases are attracting increasing interest because of possible experimental application in quantum computation. We study the effects of the environment (modeled as an ensemble of harmonic oscillators) on a holonomic transformation and write the corresponding master equation. The solution is analytically and numerically investigated and the behavior of the fidelity analyzed: fidelity revivals are observed and an optimal finite operation time is determined at which the gate is most robust against noise

  17. Robust keyword retrieval method for OCRed text

    Science.gov (United States)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, the insertion of noise characters and dropping of characters in the keyword retrieval enables robustness against character segmentation errors, and character substitution in the keyword of the recognition candidate for each character in OCR or any other character enables robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.

  18. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    Science.gov (United States)

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.

  19. Robust AlGaN/GaN MMIC Receiver Components

    NARCIS (Netherlands)

    Heijningen, M. van; Janssen, J.P.B.; Vliet, F.E. van

    2009-01-01

    Apart from delivering very high output powers, GaN can also be used to realize robust receiver components, such as Low Noise Amplifiers and Switches. This paper presents the design and measurement results of two GaN X-band switch and LNA MMICs, designed for integration in a radar front end. The switch

  20. Automatic Speech Acquisition and Recognition for Spacesuit Audio Systems

    Science.gov (United States)

    Ye, Sherry

    2015-01-01

    NASA has a widely recognized but unmet need for novel human-machine interface technologies that can facilitate communication during astronaut extravehicular activities (EVAs), when loud noises and strong reverberations inside spacesuits make communication challenging. WeVoice, Inc., has developed a multichannel signal-processing method for speech acquisition in noisy and reverberant environments that enables automatic speech recognition (ASR) technology inside spacesuits. The technology reduces noise by exploiting differences between the statistical nature of signals (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, ASR accuracy can be improved to the level at which crewmembers will find the speech interface useful. System components and features include beam forming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, and ASR decoding. Arithmetic complexity models were developed and will help designers of real-time ASR systems select proper tasks when confronted with constraints in computational resources. In Phase I of the project, WeVoice validated the technology. The company further refined the technology in Phase II and developed a prototype for testing and use by suited astronauts.

  1. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  2. Enabling Rapid and Robust Structural Analysis During Conceptual Design

    Science.gov (United States)

    Eldred, Lloyd B.; Padula, Sharon L.; Li, Wu

    2015-01-01

    This paper describes a multi-year effort to add a structural analysis subprocess to a supersonic aircraft conceptual design process. The desired capabilities include parametric geometry, automatic finite element mesh generation, static and aeroelastic analysis, and structural sizing. The paper discusses implementation details of the new subprocess, captures lessons learned, and suggests future improvements. The subprocess quickly compares concepts and robustly handles large changes in wing or fuselage geometry. The subprocess can rank concepts with regard to their structural feasibility and can identify promising regions of the design space. The automated structural analysis subprocess is deemed robust and rapid enough to be included in multidisciplinary conceptual design and optimization studies.

  3. Automatic Segmentation of Vessels in In-Vivo Ultrasound Scans

    DEFF Research Database (Denmark)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin

    2017-01-01

    Ultrasound has become highly popular to monitor atherosclerosis, by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps, and performs a vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers ”8L2 Linear” and ”10L2w Wide Linear” (BK Ultrasound, Herlev, Denmark). The algorithm
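
    A minimal sketch of a marker-controlled watershed segmentation with scikit-image, illustrating only the final step named above rather than the full published pipeline; the marker construction and the synthetic image are assumptions.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      # synthetic B-mode-like image: a dark horizontal lumen inside brighter tissue
      image = 0.6 + 0.05 * np.random.randn(256, 256)
      image[110:150, :] = 0.1 + 0.05 * np.random.randn(40, 256)

      # markers: confidently dark pixels are lumen (label 2), bright pixels tissue (label 1)
      markers = np.zeros(image.shape, dtype=np.int32)
      markers[image < 0.2] = 2
      markers[image > 0.5] = 1

      gradient = sobel(ndi.median_filter(image, size=5))   # edge strength after despeckling
      labels = watershed(gradient, markers)                # flood regions from the markers
      print((labels == 2).sum(), "lumen pixels")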

  4. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
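
    For context, the sketch below solves the standard (non-robust) LS-SVM regression problem with an RBF kernel, i.e. the baseline that the proposed robust variant modifies; the kernel width and regularization values are placeholders.

      import numpy as np

      def rbf_kernel(a, b, width=1.0):
          d2 = np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :] - 2.0 * a @ b.T
          return np.exp(-d2 / (2.0 * width ** 2))

      def lssvm_fit(x, y, gamma=10.0, width=1.0):
          """Solve the LS-SVM linear KKT system for the bias b and dual weights alpha."""
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = rbf_kernel(x, x, width) + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
          return sol[0], sol[1:]

      def lssvm_predict(x_train, b, alpha, x_new, width=1.0):
          return rbf_kernel(x_new, x_train, width) @ alpha + b

      # usage on a noisy sine
      x = np.linspace(0, 6, 80)[:, None]
      y = np.sin(x[:, 0]) + 0.1 * np.random.randn(80)
      b, alpha = lssvm_fit(x, y)
      print(lssvm_predict(x, b, alpha, x[:5]))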

  5. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu

    2009-03-02

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, it suffers an extra 1/N-type slowdown relative to the noise-based logic system as N increases, and it is less robust against time-delay effects than the noise-based counterpart.

  8. Source localization analysis using seismic noise data acquired in exploration geophysics

    Science.gov (United States)

    Roux, P.; Corciulo, M.; Campillo, M.; Dubuq, D.

    2011-12-01

    Passive monitoring using seismic noise data is attracting growing interest at the exploration scale. Recent studies have demonstrated source localization capability using seismic noise cross-correlation at observation scales ranging from hundreds of kilometers to meters. In the context of exploration geophysics, classical localization methods using travel-time picking fail when no clear first arrivals can be detected. Likewise, methods based on the intensity decrease as a function of distance to the source fail when the noise intensity decay is more complicated than the power law expected from geometrical spreading. We propose here an automatic procedure, developed in ocean acoustics, that iteratively locates the dominant and secondary noise sources. The Matched-Field Processing (MFP) technique is based on the spatial coherence of raw noise signals acquired on a dense array of receivers in order to produce high-resolution source localizations. Standard MFP algorithms locate the dominant noise source by matching the seismic noise Cross-Spectral Density Matrix (CSDM) with the equivalent CSDM calculated from a model and a surrogate source position that scans each position of a 3D grid below the array of seismic sensors. However, at the exploration scale, the background noise is mostly dominated by surface noise sources related to human activities (roads, industrial platforms, ...), whose localization is of no interest for the monitoring of the hydrocarbon reservoir. In other words, the dominant noise sources mask lower-amplitude noise sources associated with the extraction process (in the volume), whose location is therefore difficult to obtain with the standard MFP technique. The Multi-Rate Adaptive Beamforming (MRABF) method is a further improvement of the MFP technique that locates low-amplitude secondary noise sources using a projector matrix calculated from the eigenvalue decomposition of the CSDM. The MRABF approach aims at cancelling the contributions of
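
    A toy sketch of the basic matched-field step, a Bartlett processor that scans candidate source positions and is large where the modeled delays match the noise CSDM; the straight-ray constant-velocity replica model, the geometry and the synthetic CSDM are simplifying assumptions, and the MRABF projection step is not shown.

      import numpy as np

      def bartlett_map(csdm, receivers, grid, freq, velocity=2000.0):
          """Bartlett output for each candidate source position on the grid."""
          omega = 2.0 * np.pi * freq
          out = np.zeros(len(grid))
          for i, src in enumerate(grid):
              delays = np.linalg.norm(receivers - src, axis=1) / velocity
              w = np.exp(-1j * omega * delays)
              w /= np.linalg.norm(w)                        # replica (steering) vector
              out[i] = np.real(np.conj(w) @ csdm @ w)       # matched-field power
          return out

      # synthetic CSDM for one dominant surface source recorded on 20 receivers
      rng = np.random.default_rng(0)
      receivers = np.c_[rng.uniform(0, 500, 20), rng.uniform(0, 500, 20), np.zeros(20)]
      true_src = np.array([250.0, 100.0, 0.0])
      freq = 10.0
      d = np.exp(-1j * 2 * np.pi * freq *
                 np.linalg.norm(receivers - true_src, axis=1) / 2000.0)
      csdm = np.outer(d, d.conj()) + 0.01 * np.eye(20)

      grid = np.array([[x, y, 0.0] for x in range(0, 501, 50) for y in range(0, 501, 50)])
      print("peak at", grid[np.argmax(bartlett_map(csdm, receivers, grid, freq))])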

  9. Improved automatic filtering methodology for an optimal pharmacokinetic modelling of DCE-MR images of the prostate

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez Martinez, V.; Bosch Roig, I.; Sanz Requena, R.

    2016-07-01

    In Dynamic Contrast-Enhanced Magnetic Resonance (DCE-MR) studies with high temporal resolution, images are quite noisy due to the difficult trade-off between temporal and spatial resolution. For this reason, the temporal curves extracted from the images present considerable noise levels, which affects the pharmacokinetic parameters calculated from the curves by least-squares fitting as well as the arterial phase (a useful marker in tumour diagnosis, which appears in curves with a high arterial contribution). In order to overcome these limitations, an automatic filtering method was developed by our group. In this work, an advanced automatic filtering methodology is presented to further improve the noise reduction of the temporal curves, in order to obtain more accurate kinetic parameters and a proper modelling of the arterial phase. (Author)

  10. Liquid chromatography-mass spectrometry platform for both small neurotransmitters and neuropeptides in blood, with automatic and robust solid phase extraction

    Science.gov (United States)

    Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa

    2015-03-01

    Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physiochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. peptides oxytocin and vasopressin, monoamines adrenaline and serotonin, and amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitter-ionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes producing sharp and symmetric signals, and allowing precise quantifications of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.

  11. Results from the Dutch speech-in-noise screening test by telephone

    NARCIS (Netherlands)

    Smits, C.H.M.; Houtgast, T.

    2005-01-01

    OBJECTIVE: The objective of the study was to implement a previously developed automatic speech-in-noise screening test by telephone (Smits, Kapteyn, & Houtgast, 2004), introduce it nationwide as a self-test, and analyze the results. DESIGN: The test was implemented on an interactive voice response

  12. Absolute negative mobility induced by white Poissonian noise

    International Nuclear Information System (INIS)

    Spiechowicz, J; Łuczka, J; Hänggi, P

    2013-01-01

    We study the transport properties of inertial Brownian particles which move in a symmetric periodic potential and are subjected to both a symmetric, unbiased time-periodic external force and a biased Poissonian white shot noise (of non-zero average F) which is composed of a random sequence of δ-shaped pulses with random amplitudes. Upon varying the parameters of the white shot noise, one can conveniently manipulate the transport direction and the overall nonlinear response behavior. We find that within tailored parameter regimes the response is opposite to the applied average bias F of such white shot noise. This particular transport characteristic thus mimics that of a nonlinear absolute negative mobility (ANM) regime. Moreover, such white shot noise driven ANM is robust with respect to the statistics of the shot noise spikes. Our findings can be checked and corroborated experimentally by the use of a setup that consists of a single resistively and capacitively shunted Josephson junction device. (paper)

  13. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    Science.gov (United States)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening a time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross validation approach that does not rely on any prior knowledge about the noise level. Efficiency of thresholding has been improved by adding a pre-processing step based on higher order statistics and a post-processing step based on adaptive hard-thresholding. In doing so, both accuracy and speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either remove the noise (white or colored) and keep the signal, or remove the signal and keep the noise. Hence, it can be used either in normal denoising applications or in ambient noise studies. Application of the proposed method on synthetic and real seismic data shows the effectiveness of the method for denoising/designaling of local microseismic and ocean bottom seismic data. References: Mousavi, S.M., C. A. Langston., and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed-Continuous Wavelet Transform. Geophysics. 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding. Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics., doi: http
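
    As a rough, editorial illustration of time-frequency thresholding (not the cited algorithm: an ordinary STFT stands in for the synchrosqueezed transform, and a simple noise-floor estimate replaces the general cross validation threshold):

```python
# Illustrative sketch only: hard thresholding in a time-frequency domain.
import numpy as np
from scipy.signal import stft, istft

def tf_denoise(x, fs, threshold_factor=3.0):
    f, t, Z = stft(x, fs=fs, nperseg=256)
    noise_floor = np.median(np.abs(Z)) / 0.6745           # robust sigma estimate
    mask = np.abs(Z) > threshold_factor * noise_floor      # keep energetic TF cells
    _, x_den = istft(Z * mask, fs=fs, nperseg=256)
    return x_den[: len(x)]

if __name__ == "__main__":
    fs = 500.0
    t = np.arange(0, 20, 1 / fs)
    signal = np.exp(-((t - 10) ** 2)) * np.sin(2 * np.pi * 12 * t)   # toy "event"
    noisy = signal + 0.3 * np.random.default_rng(1).standard_normal(t.size)
    clean = tf_denoise(noisy, fs)
    print("residual RMS:", np.sqrt(np.mean((clean - signal) ** 2)))
```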

  14. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E.; van Dam, H.; Kleiss, E.B.J.; van Uitert, G.C.; Veldhuis, D.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with control system and noise sources expressions were derived for the measured cross power spectral densities, which were adjusted to satisfy the requirements following from the adopted model. Then for each frequency point the required transfer functions and noise sources could be derived. The results are in agreement with those of autoregressive modelling of the reactor control feed-back loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-gaussian probability density function of the power fluctuations.
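
    For illustration only (not the original analysis code), a transfer function between two measured signals can be estimated from cross and auto power spectral densities as H(f) = P_xy(f)/P_xx(f); the signals below are synthetic stand-ins for the control-rod displacement and a neutron-detector signal.

```python
# Minimal sketch: transfer function estimate from measured signals via spectral densities.
import numpy as np
from scipy.signal import csd, welch

fs = 100.0
rng = np.random.default_rng(2)
rod = rng.standard_normal(100_000)                        # "input": rod displacement
detector = np.convolve(rod, np.exp(-np.arange(50) / 5.0), mode="same")
detector += 0.5 * rng.standard_normal(rod.size)           # additive detector noise source

f, P_xy = csd(rod, detector, fs=fs, nperseg=1024)         # cross power spectral density
_, P_xx = welch(rod, fs=fs, nperseg=1024)                 # input auto spectrum
H = P_xy / P_xx                                           # transfer function estimate
print(H[:5])
```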

  15. Determination of noise sources and space-dependent reactor transfer functions from measured output signals only

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1982-01-01

    The measured cross power spectral densities of the signals from three neutron detectors and the displacement of the control rod of the 2 MW research reactor HOR at Delft have been used to determine the space-dependent reactor transfer function, the transfer function of the automatic reactor control system and the noise sources influencing the measured signals. From a block diagram of the reactor with control system and noise sources expressions were derived for the measured cross power spectral densities, which were adjusted to satisfy the requirements following from the adopted model. Then for each frequency point the required transfer functions and noise sources could be derived. The results are in agreement with those of autoregressive modelling of the reactor control feed-back loop. A method has been developed to determine the non-linear characteristics of the automatic reactor control system by analysing the non-gaussian probability density function of the power fluctuations. (author)

  16. Noise robust automatic speech recognition with adaptive quantile based noise estimation and speech band emphasizing filter bank

    DEFF Research Database (Denmark)

    Bonde, Casper Stork; Graversen, Carina; Gregersen, Andreas Gregers

    2005-01-01

    and standard MFCC. AQBNE also outperforms the Aurora Baseline for the Medium Mismatch (MM) and Well Matched (WM) conditions. Though for all three conditions, the Aurora Advanced Frontend achieves superior performance, the AQBNE is still a relevant method to consider for small footprint applications....
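
    The record above is truncated, but its title refers to quantile-based noise estimation. As a generic, hedged sketch of that idea (not the cited AQBNE implementation), one can take a low quantile of the short-time power spectrum in each frequency bin as the noise estimate and subtract it before feature extraction:

```python
# Generic sketch of quantile-based noise estimation and spectral subtraction
# (illustrative; quantile, window length and the subtraction rule are assumptions).
import numpy as np
from scipy.signal import stft, istft

def quantile_noise_estimate(x, fs, q=0.5, nperseg=256):
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    return np.quantile(power, q, axis=1)          # per-frequency-bin noise power

def spectral_subtract(x, fs, q=0.5, nperseg=256):
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    noise = quantile_noise_estimate(x, fs, q, nperseg)[:, None]
    power = np.maximum(np.abs(Z) ** 2 - noise, 0.0)
    _, x_hat = istft(np.sqrt(power) * np.exp(1j * np.angle(Z)), fs=fs, nperseg=nperseg)
    return x_hat[: len(x)]

# toy usage: a tone buried in white noise
fs = 8000.0
t = np.arange(0, 2, 1 / fs)
noisy = np.sin(2 * np.pi * 300 * t) + np.random.default_rng(2).standard_normal(t.size)
enhanced = spectral_subtract(noisy, fs)
print("RMS before/after:", noisy.std(), enhanced.std())
```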

  17. Model tracking dual stochastic controller design under irregular internal noises

    International Nuclear Information System (INIS)

    Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young

    2006-01-01

    Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design controllers that exclude various noises more effectively and efficiently. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) that come from elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce the system performance. A new stochastic controller is adopted to remove those noises while operating alongside a conventional controller. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. In this study, the design process of the model tracking dual stochastic controller is introduced; it improves system performance and guarantees robustness under irregular internal noises which can be created internally. The model tracking dual stochastic controller utilizing the F-P-K stochastic control technique developed earlier is implemented to reveal its performance via simulation

  18. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep functions (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
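
    As an editorial illustration of one common family of detectors covered by reviews of this kind (not an algorithm from the cited paper): sigma-band filtering, envelope extraction, and an amplitude/duration criterion.

```python
# Minimal sketch of a threshold-based spindle detector on synthetic EEG.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_spindles(eeg, fs, band=(11.0, 16.0), k=3.0, min_dur=0.5):
    # band-pass in the sigma band and take the analytic-signal envelope
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
    threshold = envelope.mean() + k * envelope.std()
    above = envelope > threshold
    # keep supra-threshold runs longer than min_dur as candidate spindles
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur:
                events.append((start / fs, i / fs))
            start = None
    if start is not None and (len(above) - start) / fs >= min_dur:
        events.append((start / fs, len(above) / fs))
    return events

# toy usage: 30 s of noise with a 1 s burst of 13 Hz activity at t = 10 s
fs = 200.0
t = np.arange(0, 30, 1 / fs)
eeg = np.random.default_rng(1).standard_normal(t.size) * 5.0
eeg[int(10 * fs):int(11 * fs)] += 30.0 * np.sin(2 * np.pi * 13.0 * t[:int(fs)])
print(detect_spindles(eeg, fs))
```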

  19. Genetic noise control via protein oligomerization

    Directory of Open Access Journals (Sweden)

    Almaas Eivind

    2008-11-01

    Full Text Available Abstract Background Gene expression in a cell entails random reaction events occurring over disparate time scales. Thus, molecular noise that often results in phenotypic and population-dynamic consequences sets a fundamental limit to biochemical signaling. While there have been numerous studies correlating the architecture of cellular reaction networks with noise tolerance, only a limited effort has been made to understand the dynamic role of protein-protein interactions. Results We have developed a fully stochastic model for the positive feedback control of a single gene, as well as a pair of genes (toggle switch), integrating quantitative results from previous in vivo and in vitro studies. In particular, we explicitly account for the fast binding-unbinding kinetics among proteins, RNA polymerases, and the promoter/operator sequences of DNA. We find that the overall noise-level is reduced and the frequency content of the noise is dramatically shifted to the physiologically irrelevant high-frequency regime in the presence of protein dimerization. This is independent of the choice of monomer or dimer as transcription factor and persists throughout the multiple model topologies considered. For the toggle switch, we additionally find that the presence of a protein dimer, either homodimer or heterodimer, may significantly reduce its random switching rate. Hence, the dimer promotes the robust function of bistable switches by preventing the uninduced (induced) state from randomly being induced (uninduced). Conclusion The specific binding between regulatory proteins provides a buffer that may prevent the propagation of fluctuations in genetic activity. The capacity of the buffer is a non-monotonic function of association-dissociation rates. Since the protein oligomerization per se does not require extra protein components to be expressed, it provides a basis for the rapid control of intrinsic or extrinsic noise. The stabilization of regulatory circuits
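
    For illustration only (not the authors' model), a Gillespie-type stochastic simulation with explicit dimerization reactions might look as follows; all rate constants are hypothetical.

```python
# Hypothetical Gillespie-type simulation: protein production/decay with reversible
# dimerization, showing how fast binding/unbinding reactions enter a stochastic model.
import numpy as np

rng = np.random.default_rng(3)
k_prod, k_deg = 10.0, 0.1        # monomer production and degradation rates
k_on, k_off = 5.0, 50.0          # fast dimerization / dissociation rates

P, D, t, t_end = 0, 0, 0.0, 500.0
samples = []
while t < t_end:
    rates = np.array([k_prod,                    # 0: -> P
                      k_deg * P,                 # 1: P ->
                      k_on * P * (P - 1) / 2.0,  # 2: P + P -> D
                      k_off * D])                # 3: D -> P + P
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.choice(4, p=rates / total)
    if r == 0:
        P += 1
    elif r == 1:
        P -= 1
    elif r == 2:
        P, D = P - 2, D + 1
    else:
        P, D = P + 2, D - 1
    samples.append(P)

samples = np.array(samples, dtype=float)
# crude event-sampled summary (a time-weighted average would be more rigorous)
print("monomer mean:", samples.mean(), "Fano factor:", samples.var() / samples.mean())
```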

  20. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system in recovering motion estimation in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: the Multichannel Gradient Model (McGM). This novel customizable architecture of a neuromorphic robust optical flow can be constructed with an FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes the resource usage and performance data, and the comparison with current systems. This hardware has many application fields, such as object recognition, navigation, or tracking in difficult environments, due to its bioinspired nature and robustness properties.

  1. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  2. GHZ argument for four-qubit entangled states in the presence of white and colored noise

    International Nuclear Information System (INIS)

    Shi Mingjun; Ren Changliang; Chong Bo; Du Jiangfeng

    2008-01-01

    The Greenberger-Horne-Zeilinger (GHZ) argument of nonlocality without inequalities is extended to the case of four-qubit mixed states. Three different kinds of entangled states are analyzed in the presence of white and colored noise. The nonlocality properties of these states are weakened and destroyed by the noise. We find that all these states have the same ability to resist the influence of white noise, while the cluster state is the most robust against colored noise.

  3. Monitoring ship noise to assess the impact of coastal developments on marine mammals.

    Science.gov (United States)

    Merchant, Nathan D; Pirotta, Enrico; Barton, Tim R; Thompson, Paul M

    2014-01-15

    The potential impacts of underwater noise on marine mammals are widely recognised, but uncertainty over variability in baseline noise levels often constrains efforts to manage these impacts. This paper characterises natural and anthropogenic contributors to underwater noise at two sites in the Moray Firth Special Area of Conservation, an important marine mammal habitat that may be exposed to increased shipping activity from proposed offshore energy developments. We aimed to establish a pre-development baseline, and to develop ship noise monitoring methods using Automatic Identification System (AIS) and time-lapse video to record trends in noise levels and shipping activity. Our results detail the noise levels currently experienced by a locally protected bottlenose dolphin population, explore the relationship between broadband sound exposure levels and the indicators proposed in response to the EU Marine Strategy Framework Directive, and provide a ship noise assessment toolkit which can be applied in other coastal marine environments. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
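
    A minimal sketch of the comparison described above, using simulated data and the Huber M-estimator from statsmodels as one possible robust linear model (the choice of estimator and the data are illustrative assumptions):

```python
# Illustrative single-gene cis-eQTL test: OLS versus a robust linear model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
genotype = rng.integers(0, 3, size=n)          # allelic dosage 0/1/2
covariate = rng.standard_normal(n)             # e.g. a known confounder
# heavy-tailed (t-distributed) noise plus a few gross outliers
noise = rng.standard_t(df=3, size=n)
noise[:5] += 15.0
expression = 0.3 * genotype + 0.5 * covariate + noise

X = sm.add_constant(np.column_stack([genotype, covariate]))
ols_fit = sm.OLS(expression, X).fit()
rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
print("OLS effect estimate:", ols_fit.params[1])
print("RLM effect estimate:", rlm_fit.params[1])
```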

  5. A manual-control approach to development of VTOL automatic landing technology.

    Science.gov (United States)

    Kelly, J. R.; Niessen, F. R.; Garren, J. F., Jr.

    1973-01-01

    The operation of VTOL aircraft in the city-center environment will require complex landing-approach trajectories that insure adequate clearance from other traffic and obstructions and provide the most direct routing for efficient operations. As part of a larger program to develop the necessary technology base, a flight investigation was undertaken to study the problems associated with manual and automatic control of steep, decelerating instrument approaches and landings. The study employed a three-cue flight director driven by control laws developed and refined during manual-control studies and subsequently applied to the automatic approach problem. The validity of this approach was demonstrated by performing the first automatic approach and landings to a predetermined spot ever accomplished with a helicopter. The manual-control studies resulted in the development of a constant-attitude deceleration profile and a low-noise navigation system.

  6. Robust automated classification of first-motion polarities for focal mechanism determination with machine learning

    Science.gov (United States)

    Ross, Z. E.; Meier, M. A.; Hauksson, E.

    2017-12-01

    Accurate first-motion polarities are essential for determining earthquake focal mechanisms, but are difficult to measure automatically because of picking errors and signal-to-noise issues. Here we develop an algorithm for reliable automated classification of first-motion polarities using machine learning algorithms. A classifier is designed to identify whether the first-motion polarity is up, down, or undefined by examining the waveform data directly. We first improve the accuracy of automatic P-wave onset picks by maximizing a weighted signal/noise ratio for a suite of candidate picks around the automatic pick. We then use the waveform amplitudes before and after the optimized pick as features for the classification. We demonstrate the method's potential by training and testing the classifier on tens of thousands of hand-made first-motion picks by the Southern California Seismic Network. The classifier assigned the same polarity as chosen by an analyst in more than 94% of the records. We show that the method is generalizable to a variety of learning algorithms, including neural networks and random forest classifiers. The method is suitable for automated processing of large seismic waveform datasets, and can potentially be used in real-time applications, e.g. for improving the source characterizations of earthquake early warning algorithms.
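
    An editorial sketch of the two ingredients described above, on synthetic waveforms: refining the pick by maximizing a simple signal-to-noise ratio over candidate picks, then classifying polarity from amplitudes around the refined pick with a random forest. This is an interpretation for illustration, not the authors' implementation.

```python
# Illustrative sketch on synthetic waveforms (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def refine_pick(waveform, pick, search=10, win=20):
    # choose the candidate pick that maximizes a simple post/pre SNR
    best, best_snr = pick, -np.inf
    for cand in range(pick - search, pick + search + 1):
        snr = waveform[cand:cand + win].std() / (waveform[cand - win:cand].std() + 1e-12)
        if snr > best_snr:
            best, best_snr = cand, snr
    return best

def polarity_features(waveform, pick, win=5):
    return waveform[pick - win:pick + win]        # amplitudes around the refined pick

rng = np.random.default_rng(5)
X, y = [], []
for label in (1, -1):                              # up / down first motions
    for _ in range(500):
        w = 0.05 * rng.standard_normal(200)
        w[100:] += label * np.exp(-np.arange(100) / 10.0)   # synthetic onset at sample 100
        pick = refine_pick(w, 100 + int(rng.integers(-3, 4)))
        X.append(polarity_features(w, pick))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```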

  7. Robust audio-visual speech recognition under noisy audio-video conditions.

    Science.gov (United States)

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either/both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, in both clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.

  8. Automatic measurement of the radioactive mercury uptake by the kidney

    International Nuclear Information System (INIS)

    Zurowski, S.; Raynaud, C.; CEA, 91 - Orsay

    1976-01-01

    An entirely automatic method to measure the Hg uptake by the kidney is proposed. The following operations are carried out in succession: measurement of extrarenal activity, demarcation of uptake areas, anatomical identification of uptake areas, separation of overlapping organ images and measurement of kidney depth. The first results thus calculated on 30 patients are very close to those obtained with a standard manual method and are highly encouraging. Two important points should be stressed: a broad demarcation of the uptake areas is necessary and an original method, that of standard errors, is useful for the background noise determination and uptake area demarcation. This automatic measurement technique is so designed that it can be applied to other special cases [fr

  9. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can make robust and real-time detection and recognition of parking spaces. During parking process, the omnidirectional information of the environment can be obtained by using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. In order to achieve this purpose, a polynomial fisheye distortion model is firstly used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine four individual images from fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Through experimental analysis, we can see that the proposed method can get effective and robust real-time results in both parking space recognition and automatic parking.

  10. A Robust H∞ Controller for an UAV Flight Control System

    Directory of Open Access Journals (Sweden)

    J. López

    2015-01-01

    Full Text Available The objective of this paper is the implementation and validation of a robust H∞ controller for a UAV to track all types of manoeuvres in a noisy environment. A robust inner-outer loop strategy is implemented. To design the robust controller in the inner loop, H∞ control methodology is used. The two controllers that form the outer loop are designed using the H∞ Loop Shaping technique. The reference vector used in the control architecture, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft. The simulation results show that the proposed control scheme works well despite the presence of noise and uncertainties, so the control system satisfies the requirements.

  11. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    Science.gov (United States)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix based on its definition to acquire the multichannel SAR processing filter. The performance of processing under a nonuniform scattering coefficient is improved by this novel method, and it is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
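
    A simplified numpy sketch of interference-plus-noise covariance reconstruction from a Capon spatial spectrum, in the spirit of the method described above (the array geometry, angle grid and loading factor are placeholders, and all SAR-specific processing is omitted):

```python
# Simplified Capon-spectrum-based covariance reconstruction and adaptive weights.
import numpy as np

def steering_vector(theta, n_elems, d=0.5):
    # uniform linear array, element spacing d in wavelengths
    return np.exp(2j * np.pi * d * np.arange(n_elems) * np.sin(theta))

def reconstruct_covariance(R, thetas, n_elems):
    # integrate Capon spectrum * steering outer products over the ambiguous directions
    # (in practice the region around the desired signal direction is excluded)
    R_inv = np.linalg.inv(R)
    R_rec = np.zeros_like(R, dtype=complex)
    for theta in thetas:
        a = steering_vector(theta, n_elems)[:, None]
        capon = 1.0 / np.real(a.conj().T @ R_inv @ a)[0, 0]
        R_rec += capon * (a @ a.conj().T)
    return R_rec + 1e-3 * np.real(np.trace(R)) / n_elems * np.eye(n_elems)

def adaptive_weights(R_rec, theta_signal, n_elems):
    a0 = steering_vector(theta_signal, n_elems)[:, None]
    w = np.linalg.solve(R_rec, a0)
    return w / (a0.conj().T @ w)           # MVDR-type weights, unit gain on the signal

n = 8
R = (np.eye(n) + 0.1 * np.ones((n, n))).astype(complex)      # placeholder sample covariance
ambiguous = np.deg2rad(np.linspace(20.0, 60.0, 81))           # assumed ambiguous sector
w = adaptive_weights(reconstruct_covariance(R, ambiguous, n), np.deg2rad(0.0), n)
print(w.ravel())
```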

  12. Phase transitions in distributed control systems with multiplicative noise

    Science.gov (United States)

    Allegra, Nicolas; Bamieh, Bassam; Mitra, Partha; Sire, Clément

    2018-01-01

    Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subject to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper, we analyze the so called network consensus problem, prototypical of the above considerations, that has previously been analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions, and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. Although our study involves a ‘toy’ model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.

  13. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison.

    Science.gov (United States)

    van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A

    2016-02-21

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D(99%)) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D(99%), rectum V(30Gy) and bladder V(40Gy) to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D(99%) on average by 0.2 Gy and decreased the median rectum V(30Gy) and median bladder V(40Gy) on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal

  14. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    International Nuclear Information System (INIS)

    Van de Schoot, A J A J; Visser, J; Van Kesteren, Z; Rasch, C R N; Bel, A; Janssen, T M

    2016-01-01

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D 99% ) and OAR doses (rectum V 30Gy ; bladder V 40Gy ). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D 99% , rectum V 30Gy and bladder V 40Gy to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D 99% on average by 0.2 Gy and decreased the median rectum V 30Gy and median bladder V 40Gy on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. For all patients, the defined four-beam configuration was found optimal in

  15. Automatic humidification system to support the assessment of food drying processes

    Science.gov (United States)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  16. Background Noise Removal in Ultrasonic B-scan Images Using Iterative Statistical Techniques

    NARCIS (Netherlands)

    Wells, I.; Charlton, P. C.; Mosey, S.; Donne, K. E.

    2008-01-01

    The interpretation of ultrasonic B-scan images can be a time-consuming process and its success depends on operator skills and experience. Removal of the image background will potentially improve its quality and hence improve operator diagnosis. An automatic background noise removal algorithm is

  17. Automatic liquid nitrogen feeding device

    International Nuclear Information System (INIS)

    Gillardeau, J.; Bona, F.; Dejachy, G.

    1963-01-01

    An automatic liquid nitrogen feeding device has been developed (and used) in the framework of corrosion tests performed with constantly renewed uranium hexafluoride. The objective was to feed liquid nitrogen to a large-capacity metallic trap in order to condense uranium hexafluoride at the exit of the corrosion chambers. After having studied various available devices, a feeding device has been specifically designed to be robust, secure and autonomous, as well as to ensure a high liquid nitrogen flowrate and a high feeding frequency. The device, made of standard material, has been used during 4000 hours without any problem [fr

  18. Robust modal curvature features for identifying multiple damage in beams

    Science.gov (United States)

    Ostachowicz, Wiesław; Xu, Wei; Bai, Runbo; Radzieński, Maciej; Cao, Maosen

    2014-03-01

    Curvature mode shape is an effective feature for damage detection in beams. However, it is susceptible to measurement noise, easily impairing its advantage of sensitivity to damage. To deal with this deficiency, this study formulates an improved curvature mode shape for multiple damage detection in beams based on integrating a wavelet transform (WT) and a Teager energy operator (TEO). The improved curvature mode shape, termed the WT - TEO curvature mode shape, has inherent capabilities of immunity to noise and sensitivity to damage. The proposed method is experimentally validated by identifying multiple cracks in cantilever steel beams with the mode shapes acquired using a scanning laser vibrometer. The results demonstrate that the improved curvature mode shape can identify multiple damage accurately and reliably, and it is fairly robust to measurement noise.
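
    For illustration (not the authors' code), the two operations named above can be sketched on a synthetic curvature mode shape: wavelet denoising followed by the discrete Teager energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1]. The PyWavelets package and the thresholding rule are assumptions made for this sketch.

```python
# Sketch on synthetic data: wavelet denoising followed by the Teager energy operator
# applied to a curvature mode shape with a small local defect.
import numpy as np
import pywt

def teager(x):
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def wt_teo_curvature(curvature, wavelet="db4", level=3):
    coeffs = pywt.wavedec(curvature, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(curvature)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(curvature)]
    return teager(denoised)                                   # damage shows as local peaks

x = np.linspace(0.0, 1.0, 500)
curvature = np.sin(np.pi * x)
curvature[250:255] += 0.05                                    # simulated local damage
noisy = curvature + 0.01 * np.random.default_rng(6).standard_normal(x.size)
print("peak index:", int(np.argmax(np.abs(wt_teo_curvature(noisy)))))
```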

  19. Automatic welding detection by an intelligent tool pipe inspection

    Science.gov (United States)

    Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.

    2015-07-01

    This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.

  20. Using 3D spatial correlations to improve the noise robustness of multi component analysis of 3D multi echo quantitative T2 relaxometry data.

    Science.gov (United States)

    Kumar, Dushyant; Hariharan, Hari; Faizy, Tobias D; Borchert, Patrick; Siemonsen, Susanne; Fiehler, Jens; Reddy, Ravinder; Sedlacik, Jan

    2018-05-12

    We present a computationally feasible and iterative multi-voxel spatially regularized algorithm for myelin water fraction (MWF) reconstruction. This method utilizes 3D spatial correlations present in anatomical/pathological tissues and the underlying B1+ inhomogeneity (flip angle inhomogeneity) to enhance the noise robustness of the reconstruction while intrinsically accounting for stimulated echo contributions using T2-distribution data alone. Simulated data and in vivo data acquired using 3D non-selective multi-echo spin echo (3DNS-MESE) were used to compare the reconstruction quality of the proposed approach against those of the popular algorithm (the method by Prasloski et al.) and our previously proposed 2D multi-slice spatial regularization approach. We also investigated whether the inter-sequence correlations and agreements improved as a result of the proposed approach. MWF quantifications from two sequences, 3DNS-MESE vs 3DNS-gradient and spin echo (3DNS-GRASE), were compared for both reconstruction approaches to assess correlations and agreements between inter-sequence MWF-value pairs. MWF values from whole-brain data of six volunteers and two multiple sclerosis patients are reported as well. In comparison with competing approaches such as Prasloski's method or our previously proposed 2D multi-slice spatial regularization method, the proposed method showed better agreements with simulated truths using regression analyses and Bland-Altman analyses. For 3DNS-MESE data, MWF maps reconstructed using the proposed algorithm provided better depictions of white matter structures in subcortical areas adjoining gray matter, which agreed more closely with corresponding contrasts on T2-weighted images than MWF maps reconstructed with the method by Prasloski et al. We also achieved a higher level of correlations and agreements between inter-sequence (3DNS-MESE vs 3DNS-GRASE) MWF-value pairs. The proposed algorithm provides more noise-robust
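
    As a much-simplified, single-voxel illustration of the quantity being reconstructed (not the proposed spatially regularized algorithm), a regularized non-negative least-squares fit of a T2 distribution to multi-echo decay data yields the myelin water fraction as the short-T2 portion of the distribution; all sequence parameters below are assumptions.

```python
# Single-voxel T2-distribution fit and MWF estimate (illustrative only).
import numpy as np
from scipy.optimize import nnls

echo_times = np.arange(10, 330, 10) * 1e-3            # 32 echoes, 10 ms spacing (assumed)
t2_grid = np.logspace(np.log10(0.010), np.log10(2.0), 60)
A = np.exp(-echo_times[:, None] / t2_grid[None, :])   # dictionary of decay curves

# synthetic voxel: 15% myelin water (T2 ~ 20 ms), 85% intra/extracellular water (~80 ms)
decay = 0.15 * np.exp(-echo_times / 0.020) + 0.85 * np.exp(-echo_times / 0.080)
decay += 0.005 * np.random.default_rng(7).standard_normal(decay.size)

# small Tikhonov term appended to stabilize the distribution
mu = 0.05
A_reg = np.vstack([A, mu * np.eye(t2_grid.size)])
b_reg = np.concatenate([decay, np.zeros(t2_grid.size)])
spectrum, _ = nnls(A_reg, b_reg)

mwf = spectrum[t2_grid < 0.040].sum() / spectrum.sum()
print(f"estimated myelin water fraction: {mwf:.3f}")
```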

  1. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    Science.gov (United States)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.

  2. X-band Robust AlGaN/GaN Receiver MMICs with over 41 dBm Power Handling

    NARCIS (Netherlands)

    Janssen, J.P.B.; Heijningen, M. van; Provenzano, G.; Visser, G.C.; Morvan, E.; Vliet, F.E. van

    2008-01-01

    Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realize robust receiver components. This paper presents the design and measurement of a robust AlGaN/GaN Low Noise Amplifier and Transmit/Receive Switch MMIC. Two versions of

  3. X-Band Robust AlGaN/GaN Receiver MMICs with over 41 dBm Power Handling

    NARCIS (Netherlands)

    Janssen, J.P.B.; van Heijningen, M; Provenzano, G.; van Vliet, Frank Edward

    2008-01-01

    Abstract Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realize robust receiver components. This paper presents the design and measurement of a robust AlGaN/GaN Low Noise Amplifier and Transmit/Receive Switch MMIC. Two

  4. The impact of different background noises on the Production Effect.

    Science.gov (United States)

    Mama, Yaniv; Fostick, Leah; Icht, Michal

    2018-04-01

    The presence of background noise has been previously shown to disrupt cognitive performance, especially memory. The amount of interference is derived from the acoustic characteristics of the noise; energetic vs. informational, steady-state vs. fluctuating. However, the literature is inconsistent concerning the effects of different types of noise on long-term memory free recall. In the present study, we tested the impact of different noises on recall of items that were learned under two conditions - silent or aloud reading, a Production Effect (PE) paradigm. As the PE represents enhanced memory for words read aloud relative to words read silently during study, we focused on the effect of noise on this robust memory phenomenon. The results showed that (a) steady-state energetic noise did not affect memory, with a recall advantage for aloud words (PE), comparable to a no-noise condition, (b) fluctuating-energetic noise and fluctuating-informational (eight-talkers babble) noise eliminated the PE, with similar recall for aloud and silent items. These results are discussed in light of their theoretical implications, stressing the role of attention in the PE. Ecological implications regarding studying in noisy environments are suggested. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Recursive Estimation for Dynamical Systems with Different Delay Rates Sensor Network and Autocorrelated Process Noises

    Directory of Open Access Journals (Sweden)

    Jianxin Feng

    2014-01-01

    Full Text Available The recursive estimation problem is studied for a class of uncertain dynamical systems with a sensor network exhibiting different delay rates and with autocorrelated process noises. The process noises are assumed to be autocorrelated across time and the autocorrelation property is described by the covariances between different time instants. The system model under consideration is subject to multiplicative noises or stochastic uncertainties. The sensor delay phenomenon occurs in a random way and each sensor in the sensor network has an individual delay rate which is characterized by a binary switching sequence obeying a conditional probability distribution. By using the orthogonal projection theorem and an innovation analysis approach, the desired recursive robust estimators including the recursive robust filter, predictor, and smoother are obtained. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.

  6. An automatic tuning method of a fuzzy logic controller for nuclear reactors

    International Nuclear Information System (INIS)

    Ramaswamy, P.; Lee, K.Y.; Edwards, R.M.

    1993-01-01

    The design and evaluation by simulation of an automatically tuned fuzzy logic controller is presented. Typically, fuzzy logic controllers are designed based on an expert's knowledge of the process. However, this approach has its limitations in the fact that the controller is hard to optimize or tune to get the desired control action. A method to automate the tuning process using a simplified Kalman filter approach is presented for the fuzzy logic controller to track a suitable reference trajectory. Here, for purposes of illustration an optimal controller's response is used as a reference trajectory to determine automatically the rules for the fuzzy logic controller. To demonstrate the robustness of this design approach, a nonlinear six-delayed neutron group plant is controlled using a fuzzy logic controller that utilizes estimated reactor temperatures from a one-delayed neutron group observer. The fuzzy logic controller displayed good stability and performance robustness characteristics for a wide range of operation

  7. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic imaging processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-01-01

    and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center

  9. P3a from white noise.

    Science.gov (United States)

    Frank, David W; Yee, Ryan B; Polich, John

    2012-08-01

    P3a and P3b event-related brain potentials (ERPs) were elicited with an auditory three-stimulus (target, distracter, and standard) discrimination task in which subjects responded only to the target. Distracter stimuli consisted of white noise or novel sounds with stimulus characteristics perceptually matched. Target/standard discrimination difficulty was manipulated by varying target/standard pitch differences to produce relatively easy, medium, and hard tasks. Error rate and response time increased with increases in task difficulty. P3a was larger for the white noise compared to novel sounds, maximum over the central/parietal recording sites, and did not differ in size across difficulty levels. P3b was unaffected by distracter type, decreased as task difficulty increased, and maximum over the parietal recording sites. The findings indicate that P3a from white noise is robust and should be useful for applied studies as it removes stimulus novelty variability. Theoretical perspectives are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Full Text Available Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
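
    A toy sketch of search-based test generation (not the study's tool chain): a small genetic algorithm evolves test cases, here a wind level and a fault time, to maximize a hypothetical "degree of challenge" score returned by a stand-in mission simulation.

```python
# Toy genetic-algorithm test-case search; simulate_mission is a hypothetical stand-in.
import random

def simulate_mission(test_case):
    wind, fault_time = test_case
    # placeholder for the closed-loop simulation: higher wind and an earlier fault
    # are assumed to make the entryway traversal harder
    return wind * 2.0 + max(0.0, 30.0 - fault_time)

def random_case():
    return (random.uniform(0.0, 10.0), random.uniform(0.0, 60.0))

def mutate(case, scale=0.5):
    wind, fault_time = case
    return (min(max(wind + random.gauss(0, scale), 0.0), 10.0),
            min(max(fault_time + random.gauss(0, 5 * scale), 0.0), 60.0))

population = [random_case() for _ in range(40)]
for generation in range(50):
    scored = sorted(population, key=simulate_mission, reverse=True)
    parents = scored[:10]                              # selection (elitism)
    children = [mutate(random.choice(parents)) for _ in range(30)]
    population = parents + children

best = max(population, key=simulate_mission)
print("most challenging case found:", best, "score:", simulate_mission(best))
```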

  11. Frequency tracking and variable bandwidth for line noise filtering without a reference.

    Science.gov (United States)

    Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei

    2011-01-01

    This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.
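
    A minimal sketch of the core idea, a digitally generated sinusoidal reference pair plus LMS adaptive noise canceling; the frequency tracking and automatic learning-rate adjustment described above are omitted, so the line frequency is assumed known.

```python
# Two-weight LMS canceller with a generated sine/cosine reference (illustrative only).
import numpy as np

def cancel_line_noise(x, fs, f_line=60.0, mu=0.01):
    n = np.arange(len(x))
    ref = np.column_stack([np.sin(2 * np.pi * f_line * n / fs),
                           np.cos(2 * np.pi * f_line * n / fs)])
    w = np.zeros(2)
    clean = np.empty_like(x)
    for i in range(len(x)):
        y = w @ ref[i]              # current estimate of the line interference
        e = x[i] - y                # error = signal with interference removed
        w += 2 * mu * e * ref[i]    # LMS weight update
        clean[i] = e
    return clean

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
neural = np.sin(2 * np.pi * 7 * t)                       # toy "physiological" content
contaminated = neural + 0.8 * np.sin(2 * np.pi * 60 * t + 0.3)
residual = cancel_line_noise(contaminated, fs)
print("residual 60 Hz magnitude:", np.abs(np.fft.rfft(residual)[300]))
```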

  12. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments

    Directory of Open Access Journals (Sweden)

    Joan Claudi Socoró

    2017-10-01

    Full Text Available One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement.
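
    As an editorial sketch of a two-class GMM detector of this kind (not the DYNAMAP sensor firmware): MFCC frames are scored by two Gaussian mixture models and a log-likelihood ratio is aggregated over the integration interval. The use of librosa and scikit-learn, and the toy audio below, are assumptions for illustration.

```python
# Two-class GMM anomalous-noise-event detector (illustrative sketch).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T      # (frames, 13)

def train_aned(rtn_clips, ane_clips, sr):
    gmm_rtn = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    gmm_ane = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    gmm_rtn.fit(np.vstack([mfcc_frames(y, sr) for y in rtn_clips]))
    gmm_ane.fit(np.vstack([mfcc_frames(y, sr) for y in ane_clips]))
    return gmm_rtn, gmm_ane

def is_anomalous(y, sr, gmm_rtn, gmm_ane):
    feats = mfcc_frames(y, sr)
    llr = gmm_ane.score_samples(feats).mean() - gmm_rtn.score_samples(feats).mean()
    return llr > 0.0            # positive log-likelihood ratio -> anomalous noise event

sr = 22050
rng = np.random.default_rng(8)
rtn_clips = [rng.standard_normal(sr) for _ in range(3)]               # stand-in traffic noise
ane_clips = [np.sin(2 * np.pi * 900 * np.arange(sr) / sr) +
             0.1 * rng.standard_normal(sr) for _ in range(3)]         # stand-in tonal events
gmm_rtn, gmm_ane = train_aned(rtn_clips, ane_clips, sr)
print(is_anomalous(ane_clips[0], sr, gmm_rtn, gmm_ane))
```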

  13. Robust Adaptive LCMV Beamformer Based On An Iterative Suboptimal Solution

    Directory of Open Access Journals (Sweden)

    Xiansheng Guo

    2015-06-01

    Full Text Available The main drawback of the closed-form solution of the linearly constrained minimum variance (CF-LCMV) beamformer is the dilemma of acquiring a long observation time for stable covariance matrix estimates and a short observation time to track the dynamic behavior of targets, leading to poor performance including low signal-to-noise ratio (SNR), low jammer-to-noise ratios (JNRs) and a small number of snapshots. Additionally, CF-LCMV suffers from a heavy computational burden which mainly comes from two matrix inverse operations for computing the optimal weight vector. In this paper, we derive a low-complexity Robust Adaptive LCMV beamformer based on an Iterative Suboptimal solution (RAIS-LCMV) using the conjugate gradient (CG) optimization method. The merit of our proposed method is threefold. Firstly, the RAIS-LCMV beamformer can reduce the complexity of CF-LCMV remarkably. Secondly, the RAIS-LCMV beamformer can adjust its output adaptively based on measurements and its convergence speed is comparable. Finally, the RAIS-LCMV algorithm has robust performance against low SNR, low JNRs, and small numbers of snapshots. Simulation results demonstrate the superiority of our proposed algorithms.
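
    For illustration only (a generic sketch, not the RAIS-LCMV iteration derived in the paper), the matrix inversion in an MVDR/LCMV weight computation can be replaced by a conjugate-gradient solve of R z = a, with w = z / (aᴴ z):

```python
# Conjugate-gradient solve replacing the explicit covariance inversion (generic sketch).
import numpy as np

def conjugate_gradient(R, b, n_iter=50, tol=1e-10):
    # CG for a Hermitian positive-definite matrix R
    x = np.zeros_like(b)
    r = b - R @ x
    p = r.copy()
    rs_old = np.vdot(r, r).real
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs_old / np.vdot(p, Rp).real
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def mvdr_weights_cg(R, a):
    z = conjugate_gradient(R, a)          # z = R^{-1} a without explicit inversion
    return z / np.vdot(a, z)              # unit-gain constraint in the look direction

rng = np.random.default_rng(9)
n = 16
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T + n * np.eye(n)                    # Hermitian positive-definite example
a = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))   # steering vector toward the target
w = mvdr_weights_cg(R, a)
print("distortionless response:", np.vdot(a, w))       # should be ~1
```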

  14. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Science.gov (United States)

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction.

  15. On the predictability of extreme events in records with linear and nonlinear long-range memory: Efficiency and noise robustness

    Science.gov (United States)

    Bogachev, Mikhail I.; Bunde, Armin

    2011-06-01

    We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise using two different approaches: (i) the precursory pattern recognition technique (PRT) that exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA) that exploits the long-range memory incorporated in the elapsed time after the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonically with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low-level noise where the prediction is as good as in the absence of noise, and a phase of high-level noise where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.

  16. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  17. Robust classification using mixtures of dependency networks

    DEFF Research Database (Denmark)

    Gámez, José A.; Mateo, Juan L.; Nielsen, Thomas Dyhre

    2008-01-01

    Dependency networks have previously been proposed as alternatives to e.g. Bayesian networks by supporting fast algorithms for automatic learning. Recently dependency networks have also been proposed as classification models, but as with e.g. general probabilistic inference, the reported speed-ups are often obtained at the expense of accuracy. In this paper we try to address this issue through the use of mixtures of dependency networks. To reduce learning time and improve robustness when dealing with data-sparse classes, we outline methods for reusing calculations across mixture components. Finally...

  18. Coherent network analysis technique for discriminating gravitational-wave bursts from instrumental noise

    International Nuclear Information System (INIS)

    Chatterji, Shourov; Lazzarini, Albert; Stein, Leo; Sutton, Patrick J.; Searle, Antony; Tinto, Massimo

    2006-01-01

    The sensitivity of current searches for gravitational-wave bursts is limited by non-Gaussian, nonstationary noise transients which are common in real detectors. Existing techniques for detecting gravitational-wave bursts assume the output of the detector network to be the sum of a stationary Gaussian noise process and a gravitational-wave signal. These techniques often fail in the presence of noise nonstationarities by incorrectly identifying such transients as possible gravitational-wave bursts. Furthermore, consistency tests currently used to try to eliminate these noise transients are not applicable to general networks of detectors with different orientations and noise spectra. In order to address this problem we introduce a fully coherent consistency test that is robust against noise nonstationarities and allows one to distinguish between gravitational-wave bursts and noise transients in general detector networks. This technique does not require any a priori knowledge of the putative burst waveform

  19. Hybrid Robust Optimization for the Design of a Smartphone Metal Frame Antenna

    Directory of Open Access Journals (Sweden)

    Sungwoo Lee

    2018-01-01

    Full Text Available In this paper, hybrid robust optimization that combines a genetical swarm optimization (GSO) scheme with an orthogonal array (OA) is proposed to design an antenna that is robust to the tolerances arising during its fabrication process. An inverted-F antenna with a metal frame serves as an example to explain the procedure of the proposed method. GSO is adapted to determine the design variables of the antenna, which operates in the GSM850 band (824–894 MHz). The robustness of the antenna is evaluated through a noise test using the OA. The robustness of the optimized antenna is improved by approximately 61.3% relative to that of a conventional antenna. Conventional and optimized antennas are fabricated and measured to validate the experimental results.

  20. Comparison of robust H∞ filter and Kalman filter for initial alignment of inertial navigation system

    Institute of Scientific and Technical Information of China (English)

    HAO Yan-ling; CHEN Ming-hui; LI Liang-jun; XU Bo

    2008-01-01

    There are many filtering methods that can be used for the initial alignment of an integrated inertial navigation system. This paper discusses the use of GPS, but focuses on two kinds of filters for the initial alignment of an integrated strapdown inertial navigation system (SINS). One method is based on the Kalman filter (KF), and the other is based on the robust filter. Simulation results showed that the robust filter provides a quicker transient response and a slightly more accurate estimate than the KF, given substantial process noise or unknown noise statistics. The robust filter is therefore an effective and useful method for initial alignment of SINS. This research should make the use of SINS more popular, and is also a step toward further research.

  1. Performance Evaluation and Robustness Testing of Advanced Oscilloscope Triggering Schemes

    Directory of Open Access Journals (Sweden)

    Shakeb A. KHAN

    2010-01-01

    Full Text Available In this paper, the performance and robustness of two advanced oscilloscope triggering schemes are evaluated. The problem of time-period measurement of complex waveforms can be solved using algorithms that utilize an associative-memory-network-based weighted Hamming distance (Whd) and autocorrelation-based techniques. The robustness of both advanced techniques is then evaluated by simulated addition of random noise of different levels to complex test waveforms, and the minimum value of Whd (Whd min) and the peak value of the coefficient of correlation (COC max) are computed over 10000 cycles of the selected test waveforms. The distance between the mean of the second-lowest value of Whd and Whd min, and the distance between the second-highest value of the coefficient of correlation (COC) and COC max, are used as parameters to analyze the robustness of the considered techniques. From the results, it is found that both techniques are capable of producing trigger pulses efficiently, but the correlation-based technique is found to be better from the robustness point of view.
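
    For readers unfamiliar with the autocorrelation-based scheme, the short sketch below (not the paper's associative-memory Whd implementation) estimates the period of a noisy repetitive waveform from the lag of the largest normalized autocorrelation peak, i.e. the quantity referred to above as COC; the waveform, sample rate and noise level are illustrative only.

      import numpy as np

      fs = 10_000                                   # sample rate in Hz (assumed)
      t = np.arange(0, 0.2, 1 / fs)
      clean = np.sign(np.sin(2 * np.pi * 50 * t)) + 0.5 * np.sin(2 * np.pi * 150 * t)
      noisy = clean + 0.4 * np.random.default_rng(1).standard_normal(t.size)

      def period_by_autocorrelation(x, fs, min_lag=20):
          x = x - x.mean()
          acf = np.correlate(x, x, mode="full")[x.size - 1:]   # lags 0 .. N-1
          acf = acf / acf[0]                                   # normalize so COC(0) = 1
          lag = min_lag + np.argmax(acf[min_lag:])             # skip the trivial zero-lag peak
          return lag / fs, acf[lag]                            # period estimate and COC max

      period, coc_max = period_by_autocorrelation(noisy, fs)
      print(f"estimated period = {period * 1e3:.2f} ms, COC max = {coc_max:.3f}")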

  2. Aircraft noise: effects on macro- and microstructure of sleep.

    Science.gov (United States)

    Basner, Mathias; Glatz, Christian; Griefahn, Barbara; Penzel, Thomas; Samel, Alexander

    2008-05-01

    The effects of aircraft noise on sleep macrostructure (Rechtschaffen and Kales) and microstructure (American Sleep Disorders Association [ASDA] arousal criteria) were investigated. For each of 10 subjects (mean age 35.3 years, 5 males), a baseline night without aircraft noise (control), and two nights with exposure to 64 noise events with a maximum sound pressure level (SPL) of either 45 or 65 dBA were chosen. Spontaneous and noise-induced alterations during sleep classified as arousals (ARS), changes to lighter sleep stages (CSS), awakenings including changes to sleep stage 1 (AS1), and awakenings (AWR) were analyzed. The number of events per night increased in the order AWR, AS1, CSS, and ARS under control conditions as well as under the two noise conditions. Furthermore, probabilities for sleep disruptions increased with increasing noise level. ARS were observed about fourfold compared to AWR, irrespective of control or noise condition. Under the conditions investigated, different sleep parameters show different sensitivities, but also different specificities for noise-induced sleep disturbances. We conclude that most information on sleep disturbances can be achieved by investigating robust classic parameters like AWR or AS1, although ASDA electroencephalographic (EEG) arousals might add relevant information in situations with low maximum SPLs, chronic sleep deprivation or chronic exposure.

  3. A Robust H∞ Controller for a UAV Flight Control System.

    Science.gov (United States)

    López, J; Dormido, R; Dormido, S; Gómez, J P

    2015-01-01

    The objective of this paper is the implementation and validation of a robust H∞ controller for a UAV to track all types of manoeuvres in the presence of a noisy environment. A robust inner-outer loop strategy is implemented. To design the robust controller in the inner loop, H∞ control methodology is used. The two controllers that form the outer loop are designed using the H∞ Loop Shaping technique. The reference vector used in the control architecture, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft. The simulation results show that the proposed control scheme works well despite the presence of noise and uncertainties, so the control system satisfies the requirements.

  4. Intelligent and robust optimization frameworks for smart grids

    Science.gov (United States)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits, circumventing nonlinear model complexities, and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes the smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic

  5. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
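
    A minimal sketch of the nuisance-regression step described above (not the authors' ICA analysis) is given below: nuisance regressors are fitted to every voxel time series with an ordinary linear model and the removed fraction of variance is inspected; the data here are purely synthetic and the regressor count is arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n_t, n_vox = 200, 500
      motion = rng.standard_normal((n_t, 6))             # e.g. six head-motion parameters
      data = rng.standard_normal((n_t, n_vox)) + 0.3 * (motion @ rng.standard_normal((6, n_vox)))

      X = np.column_stack([np.ones(n_t), motion])        # design matrix: intercept + nuisance regressors
      beta, *_ = np.linalg.lstsq(X, data, rcond=None)    # GLM fit, one coefficient column per voxel
      removed = X @ beta                                 # signal attributed to the regressors
      cleaned = data - removed                           # residual ("denoised") time series

      frac_removed = removed.var(axis=0) / data.var(axis=0)
      print(f"mean fraction of variance removed: {frac_removed.mean():.2f}")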

  6. Low noise omnidirectional optical receiver for the mobile FSO networks

    Science.gov (United States)

    Witas, Karel; Hejduk, Stanislav; Vasinek, Vladimir; Vitasek, Jan; Latal, Jan

    2013-05-01

    A highly sensitive optical receiver design for mobile free space optical (FSO) networks is presented. An array of photo-detectors and preamplifiers work into the same load, and a second-stage summing amplifier combines all the signals. This topology creates a parallel amplifier with an excellent signal-to-noise ratio (SNR). An automatic gain control (AGC) feature is also included. As a result, the effective noise suppression at the receiver side increases optical signal coverage even with constant transmitter power. The design has been verified on a model car which was able to respond beyond the line of sight (LOS).

  7. Automatic Generation of Facial Expression Using Triangular Geometric Deformation

    OpenAIRE

    Jia-Shing Sheu; Tsu-Shien Hsieh; Ho-Nien Shou

    2014-01-01

    This paper presents an image deformation algorithm and constructs an automatic facial expression generation system that generates new facial expressions from a face image in a neutral state. After the user inputs the face image in a neutral state into the system, the system separates the possible facial areas and the image background by skin color segmentation. It then uses morphological operations to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrows, and nose. The fea...

  8. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structure characteristics and topological relations, we put forward three basic principles of correction: network proximity, structure robustness and topology ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  9. QUALITY IMPROVEMENT IN MULTIRESPONSE EXPERIMENTS THROUGH ROBUST DESIGN METHODOLOGY

    Directory of Open Access Journals (Sweden)

    M. Shilpa

    2012-06-01

    Full Text Available Robust design methodology aims at reducing the variability of product performance in the presence of noise factors. Experiments involving simultaneous optimization of more than one quality characteristic are known as multiresponse experiments, which are used in the development and improvement of industrial processes and products. In this paper, robust design methodology is applied to optimize the process parameters during a particular operation of the rotary driving shaft manufacturing process. The three important quality characteristics of the shaft considered here are of the types Nominal-the-best, Smaller-the-better and Fraction defective. Simultaneous optimization of these responses is carried out by identifying the control parameters and conducting the experimentation using an L9 orthogonal array.
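
    For orientation, the snippet below shows the standard Taguchi-style signal-to-noise ratios for two of the quality-characteristic types mentioned above (a generic sketch; the replicate values are invented and are not data from the shaft experiment).

      import numpy as np

      def sn_smaller_the_better(y):
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(y ** 2))

      def sn_nominal_the_best(y):
          y = np.asarray(y, dtype=float)
          return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

      # Hypothetical replicated measurements from a single run of an L9 experiment.
      runout = [0.021, 0.018, 0.024]              # Smaller-the-better characteristic
      diameter = [25.02, 24.98, 25.01, 25.00]     # Nominal-the-best characteristic
      print(sn_smaller_the_better(runout), sn_nominal_the_best(diameter))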

  10. A New Robust Tracking Control Design for Turbofan Engines: H∞/Leitmann Approach

    Directory of Open Access Journals (Sweden)

    Muxuan Pan

    2017-04-01

    Full Text Available In this paper, an H∞/Leitmann approach to robust tracking control design is presented for an uncertain dynamic system. This new method is developed in the following two steps. Firstly, a tracking dynamic system with simultaneous consideration of parameter uncertainty and noise is modeled based on a linear system and a reference model. Accordingly, a “nominal system” from the tracking system is defined and controlled by an H∞ control to obtain asymptotic stability and noise resistance. Secondly, by making use of a Lyapunov function and the norm boundedness, a new robust control with the “Leitmann approach” is designed to cope with the uncertainty. The two controls collaborate with each other to achieve “uniform tracking boundedness” and “uniform ultimate tracking boundedness”. The new approach is then applied to an aircraft turbofan control design, and the numerical simulation results show the prescribed performance of the closed-loop system and the advantage of the developed approach.

  11. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

    A novel H∞ robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for the case that the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H∞ "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H∞-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. A Context Dependent Automatic Target Recognition System

    Science.gov (United States)

    Kim, J. H.; Payton, D. W.; Olin, K. E.; Tseng, D. Y.

    1984-06-01

    This paper describes a new approach to automatic target recognizer (ATR) development utilizing artificial intelligence techniques. The ATR system exploits contextual information in its detection and classification processes to provide a high degree of robustness and adaptability. In the system, knowledge about domain objects and their contextual relationships is encoded in frames, separating it from low-level image processing algorithms. This knowledge-based system demonstrates an improvement over the conventional statistical approach through the exploitation of diverse forms of knowledge in its decision-making process.

  13. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-01-01

    Existing commercial software often inadequately determines respiratory peaks for patients in respiration-correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First, the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys were then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions were employed. On average, for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. This method was robust for bellows waveforms with large variations.
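
    A simplified sketch of the intercept idea (not the validated clinical implementation) is shown below: peaks are taken as maxima between an upward and the following downward crossing of the waveform with its moving average, and valleys as minima between the opposite pair; the synthetic trace and window length are assumptions.

      import numpy as np

      def moving_average(x, win):
          return np.convolve(x, np.ones(win) / win, mode="same")

      def peaks_and_valleys(x, win=50):
          ma = moving_average(x, win)
          above = x > ma
          up = np.flatnonzero(~above[:-1] & above[1:])     # inspiration intercepts
          down = np.flatnonzero(above[:-1] & ~above[1:])   # expiration intercepts
          peaks, valleys = [], []
          for u in up:
              nxt = down[down > u]
              if nxt.size:
                  peaks.append(u + np.argmax(x[u:nxt[0] + 1]))
          for d in down:
              nxt = up[up > d]
              if nxt.size:
                  valleys.append(d + np.argmin(x[d:nxt[0] + 1]))
          return np.array(peaks), np.array(valleys)

      # Synthetic free-breathing-like trace: ~0.25 Hz breathing plus slow drift and noise.
      t = np.linspace(0, 60, 1500)
      resp = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 0.02 * t)
      resp = resp + 0.05 * np.random.default_rng(2).standard_normal(t.size)
      p, v = peaks_and_valleys(resp)
      print(len(p), "peaks,", len(v), "valleys")           # roughly one pair per breath cycle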

  14. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Full Text Available Abstract Background: Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advancement in high-throughput experimental techniques has initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions. Furthermore, robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their prediction. Results: We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time-series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formalism, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the trait of sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data-sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters

  15. Eigennoise Speech Recovery in Adverse Environments with Joint Compensation of Additive and Convolutive Noise

    Directory of Open Access Journals (Sweden)

    Trung-Nghia Phung

    2015-01-01

    Full Text Available The learning-based speech recovery approach using statistical spectral conversion has been used for certain kinds of distorted speech, such as alaryngeal speech and body-conducted (bone-conducted) speech. This approach attempts to recover clean (undistorted) speech from noisy (distorted) speech by converting the statistical models of noisy speech into those of clean speech, without prior knowledge of the characteristics and distributions of the noise source. Presently, this approach has still not attracted many researchers to apply it to general noisy speech enhancement because of two major problems: the difficulty of noise adaptation and the lack of noise-robust synthesizable features in different noisy environments. In this paper, we adopted state-of-the-art methods of voice conversion and speaker adaptation in speech recognition for the proposed speech recovery approach, applied in different kinds of noisy environments, especially in adverse environments with joint compensation of additive and convolutive noises. We proposed to use the decorrelated wavelet packet coefficients as a low-dimensional robust synthesizable feature under noisy environments. We also proposed a noise adaptation for speech recovery with the eigennoise, similar to the eigenvoice in voice conversion. The experimental results showed that the proposed approach highly outperformed traditional nonlearning-based approaches.

  16. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    Science.gov (United States)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    Conventional filters such as the extended Kalman filter, the unscented Kalman filter and the cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed by taking the following four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum number of delay steps must be one or two and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. Therefore, the proposed filter is not only robust to the randomly delayed measurements, but also robust to glint noise. The application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
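
    The Huber technique mentioned above can be illustrated independently of the filter itself; the sketch below (generic, with an invented measurement set and threshold) shows the Huber M-estimator weights that down-weight large residuals, which is what makes a measurement update robust to heavy-tailed glint noise.

      import numpy as np

      def huber_weights(residuals, sigma, gamma=1.345):
          """Weight = 1 inside the gamma*sigma band, gamma*sigma/|r| outside it."""
          z = np.abs(residuals) / sigma
          w = np.ones_like(z)
          outliers = z > gamma
          w[outliers] = gamma / z[outliers]
          return w

      meas = np.array([1.02, 0.97, 1.01, 0.99, 4.80])      # last value mimics a glint outlier
      w = huber_weights(meas - np.median(meas), sigma=0.05)
      print(np.average(meas, weights=w))                   # barely pulled toward the outlier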

  17. MEMS microphone innovations towards high signal to noise ratios (Conference Presentation) (Plenary Presentation)

    Science.gov (United States)

    Dehé, Alfons

    2017-06-01

    After decades of research and more than ten years of successful production in very high volumes, silicon MEMS microphones are mature and unbeatable in form factor and robustness. Audio applications such as video, noise cancellation and speech recognition are key differentiators in smartphones. Microphones with low self-noise enable those functions. Backplate-free microphones reach signal-to-noise ratios above 70 dB(A). This talk will describe the state-of-the-art MEMS technology of Infineon Technologies. An outlook on future technologies such as the comb sensor microphone will be given.

  18. Fault-tolerant controlled quantum secure direct communication over a collective quantum noise channel

    International Nuclear Information System (INIS)

    Yang, Chun-Wei; Hwang, Tzonelih; Tsai, Chia-Wei

    2014-01-01

    This work proposes controlled quantum secure direct communication (CQSDC) over an ideal channel. Based on the proposed CQSDC, two fault-tolerant CQSDC protocols that are robust under two kinds of collective noises, collective-dephasing noise and collective-rotation noise, respectively, are constructed. Due to the use of quantum entanglement of the Bell state (or logical Bell state) as well as dense coding, the proposed protocols provide easier implementation as well as better qubit efficiency than other CQSDC protocols. Furthermore, the proposed protocols are also free from correlation-elicitation attack and other well-known attacks. (paper)

  19. Inference of physical phenomena from FFTF [Fast Flux Test Facility] noise analysis

    International Nuclear Information System (INIS)

    Thie, J.A.; Damiano, B.; Campbell, L.R.

    1989-01-01

    The source of features observed in noise spectra collected by an automated data collection system operated by the Oak Ridge National Laboratory at the Fast Flux Test Facility (FFTF) can be identified using a methodology based on careful data observation and intuition. When a large collection of data is available, as in this case, automatic pattern recognition and parameter storage and retrieval using a data base can be used to extract useful information. However, results can be limited to empirical signature comparison monitoring unless an effort is made to determine the noise sources. This paper describes the identification of several FFTF noise data phenomena and suggests how this understanding may lead to new or enhanced monitoring. 13 refs., 4 figs

  20. Noise-induced chaos and basin erosion in softening Duffing oscillator

    International Nuclear Information System (INIS)

    Gan Chunbiao

    2005-01-01

    It is common for many dynamical systems to have two or more coexisting attractors, and in such cases the basin boundary is fractal. The purpose of this paper is to study noise-induced chaos and discuss the effect of noise on the erosion of the safe basin in the softening Duffing oscillator. The Melnikov approach is used to obtain the necessary condition for the onset of chaos, and the largest Lyapunov exponent is computed to identify the chaotic nature of sample time series from the system. According to the Melnikov condition, the safe basins are simulated for both the deterministic and the stochastic cases of the system. It is shown that external Gaussian white noise excitation is effective at inducing chaos, while external bounded noise is weak. Moreover, the erosion of the safe basin can be aggravated by both the Gaussian white and the bounded noise excitations, and a fractal boundary can appear when the system is excited only by the random processes, which indicates a noise-induced chaotic response.
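
    As a purely illustrative companion to the description above (parameter values are invented and are not those of the paper), the following Euler-Maruyama sketch integrates a harmonically forced softening Duffing oscillator with additive Gaussian white noise and reports whether the trajectory leaves the safe basin.

      import numpy as np

      delta, w0, alpha = 0.2, 1.0, 1.0      # damping, linear stiffness, softening coefficient
      f, Omega, D = 0.25, 1.1, 0.02         # forcing amplitude, forcing frequency, noise intensity
      dt, n = 1e-3, 200_000
      rng = np.random.default_rng(3)

      x, v, escape_time = 0.1, 0.0, None
      for i in range(n):
          t = i * dt
          a = -delta * v - w0 ** 2 * x + alpha * x ** 3 + f * np.cos(Omega * t)  # softening: +alpha*x^3
          x = x + v * dt
          v = v + a * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()         # white-noise increment
          if abs(x) > 5.0:                  # far outside the potential well: left the safe basin
              escape_time = t
              break

      print("escape time:", escape_time)    # None means the trajectory stayed inside the safe basin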

  1. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scale and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We’ve implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in the design and test generation of real embedded air-conditioning network systems.

  2. Automatic first-arrival picking based on extended super-virtual interferometry with quality control procedure

    Science.gov (United States)

    An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao

    2017-12-01

    Static correction is a crucial step in seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for those measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop the extended SVI with (1) the application of the reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the usage of the multi-domain method to partially overcome the limitation of the current method given insufficient available source-receiver combinations. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct the mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic, and can accommodate large data volumes in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that obtained from an auto-picking method commonly employed by commercial software.

  3. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

    The measurement of nuclear track parameters occupies an important position in the field of nuclear technology. However, the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied in the nuclear field more and more. To reduce the errors of visual measurement in manual counting, an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform is introduced in this article; it is able to effectively remove interference from the background and noise points, as well as automatically extract nuclear track points, by using a mathematical morphology algorithm. (authors)
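
    A minimal sketch of the counting idea (not the DM642 implementation) is given below: the track image is thresholded, isolated noise pixels are removed by a binary opening, and the remaining connected components are counted as track candidates; the synthetic image and threshold are assumptions.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(4)
      img = rng.normal(40.0, 5.0, size=(256, 256))                 # synthetic background
      yy, xx = np.ogrid[:256, :256]
      for y, x in rng.integers(20, 236, size=(30, 2)):             # 30 synthetic round tracks
          img[(yy - y) ** 2 + (xx - x) ** 2 <= 9] += 120.0
      img += rng.normal(0.0, 10.0, size=img.shape)                 # pixel noise

      binary = img > 100.0                                          # global threshold
      cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))  # removes isolated noise points
      labels, n_tracks = ndimage.label(cleaned)
      print("counted tracks:", n_tracks)                            # close to 30 unless tracks overlap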

  4. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

    Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated from several criteria that measure different (possibly opposing) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat the problem of model calibration purely as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify

  5. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Full Text Available Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s) are used as reference(s) to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we will discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show our proposed method outperforms the state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.

  6. Application of the robust design concept for fuel loading pattern

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Ohori, Kazuma; Yamamoto, Akio

    2011-01-01

    Application of the robust design concept to fuel loading pattern design is proposed as a new approach to improve the prediction accuracy of core characteristics. Robust design is a design concept that establishes a system resistant (robust) to perturbations or noise by properly setting design variables. In order to apply the concept of robust design to fuel loading pattern design, we focus on a theoretical approach based on the higher-order perturbation method. This approach indicates that the eigenvalue separation is one of the effective indices for measuring the robustness of a designed fuel loading pattern. In order to verify the effectiveness of the eigenvalue separation as an index of robustness, numerical analysis is carried out for typical 3-loop PWR cores, and we evaluate the correlation between the eigenvalue separation and the variation of relative assembly power due to perturbation of the cross sections. The numerical results show that the variation of relative power decreases as the eigenvalue separation increases; thus, it is confirmed that the eigenvalue separation is an effective index of robustness. Based on the eigenvalue separation of a fuel loading pattern, we discuss design guidelines for a fuel loading pattern to improve robustness. For example, if each fuel assembly has independent uncertainty in its cross sections, the robustness of the core can be enhanced by increasing the relative power at the center of the core. The proposed guidelines will be useful for designing a loading pattern that is robust to uncertainties due to cross sections, calculation methods, and so on. (author)

  7. Robust, fully automatic delineation of the head contour by stereotactical normalization for attenuation correction according to Chang in dopamine transporter scintigraphy

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Catharina; Brenner, Winfried; Buchert, Ralph [Charite - Universitaetsmedizin Berlin, Department of Nuclear Medicine, Berlin (Germany); Kurth, Jens; Schwarzenboeck, Sarah; Krause, Bernd J. [Universitaetsmedizin Rostock, Department of Nuclear Medicine, Rostock (Germany); Seese, Anita; Steinhoff, Karen; Sabri, Osama; Hesse, Swen [Universitaetsklinikum Leipzig, Department of Nuclear Medicine, Leipzig (Germany); Umland-Seidler, Bert [GE Healthcare Buchler GmbH and Co. KG, Munich (Germany)

    2015-09-15

    Chang's method, the most widely used attenuation correction (AC) in brain single-photon emission computed tomography (SPECT), requires delineation of the outer contour of the head. Manual and automatic threshold-based methods are prone to errors due to variability of tracer uptake in the scalp. The present study proposes a new method for fully automated delineation of the head based on stereotactical normalization. The method was validated for SPECT with I-123-ioflupane. The new method was compared to threshold-based delineation in 62 unselected patients who had received I-123-ioflupane SPECT at one of 3 centres. The impact on diagnostic power was tested for semi-quantitative analysis and visual reading of the SPECT images (six independent readers). The two delineation methods produced highly consistent semi-quantitative results. This was confirmed by receiver operating characteristic analyses in which the putamen specific-to-background ratio achieved highest area under the curve with negligible effect of the delineation method: 0.935 versus 0.938 for stereotactical normalization and threshold-based delineation, respectively. Visual interpretation of DVR images was also not affected by the delineation method. Delineation of the head contour by stereotactical normalization appears useful for Chang AC in I-123-ioflupane SPECT. It is robust and does not require user interaction. (orig.)

  8. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
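
    To make the geometric idea concrete, the toy sketch below (not the authors' exact algorithm) recovers the rotation from three corresponding segmented-plane normals via an SVD-based orthogonal Procrustes fit and the translation from the corresponding plane centroids; all plane normals, centroids and the simulated transformation are invented values.

      import numpy as np

      def rotation_from_normals(normals_a, normals_b):
          """Least-squares rotation R with R @ a_i ~= b_i (Kabsch / orthogonal Procrustes)."""
          H = np.asarray(normals_a).T @ np.asarray(normals_b)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

      # Normals and centroids of three segmented planes in dataset A ...
      normals_a = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.3, 0.0, 0.95]])
      normals_a /= np.linalg.norm(normals_a, axis=1, keepdims=True)
      centroids_a = np.array([[0.0, 5.0, 5.0], [5.0, 0.0, 5.0], [5.0, 5.0, 0.0]])

      # ... and the same planes as seen in dataset B (rotated about z and shifted).
      ang = np.deg2rad(20.0)
      R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                         [np.sin(ang),  np.cos(ang), 0.0],
                         [0.0,          0.0,         1.0]])
      t_true = np.array([10.0, -3.0, 1.5])
      normals_b = normals_a @ R_true.T
      centroids_b = centroids_a @ R_true.T + t_true

      R = rotation_from_normals(normals_a, normals_b)
      t = (centroids_b - centroids_a @ R.T).mean(axis=0)   # translation from corresponding centroids
      print(np.allclose(R, R_true), np.allclose(t, t_true))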

  9. Timing robustness in the budding and fission yeast cell cycles.

    KAUST Repository

    Mangla, Karan

    2010-02-01

    Robustness of biological models has emerged as an important principle in systems biology. Many past analyses of Boolean models update all pending changes in signals simultaneously (i.e., synchronously), making it impossible to consider robustness to variations in timing that result from noise and different environmental conditions. We checked previously published mathematical models of the cell cycles of budding and fission yeast for robustness to timing variations by constructing Boolean models and analyzing them using model-checking software for the property of speed independence. Surprisingly, the models are nearly, but not totally, speed-independent. In some cases, examination of timing problems discovered in the analysis exposes apparent inaccuracies in the model. Biologically justified revisions to the model eliminate the timing problems. Furthermore, in silico random mutations in the regulatory interactions of a speed-independent Boolean model are shown to be unlikely to preserve speed independence, even in models that are otherwise functional, providing evidence for selection pressure to maintain timing robustness. Multiple cell cycle models exhibit strong robustness to timing variation, apparently due to evolutionary pressure. Thus, timing robustness can be a basis for generating testable hypotheses and can focus attention on aspects of a model that may need refinement.
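
    The speed-independence check can be pictured with a toy Boolean network (this is only a schematic illustration, not one of the published yeast models): the same logic is updated synchronously and in random asynchronous order, and a timing-robust network should settle to the same state either way.

      import random

      update = {                                    # hypothetical 3-gene regulatory logic
          "A": lambda s: s["A"],                    # constitutively maintained input
          "B": lambda s: s["A"],
          "C": lambda s: s["A"] and s["B"],
      }

      def run_sync(state, steps=20):
          for _ in range(steps):
              state = {g: f(state) for g, f in update.items()}
          return state

      def run_async(state, steps=200, seed=0):
          rng = random.Random(seed)
          state = dict(state)
          for _ in range(steps):
              g = rng.choice(sorted(update))        # one randomly timed update at a time
              state[g] = update[g](state)
          return state

      start = {"A": True, "B": False, "C": False}
      print(run_sync(start), run_async(start))      # differing end states would flag timing sensitivity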

  10. All-in-One Wafer-Level Solution for MMIC Automatic Testing

    Directory of Open Access Journals (Sweden)

    Xu Ding

    2018-04-01

    Full Text Available In this paper, we present an all-in-one wafer-level solution for MMIC (monolithic microwave integrated circuit) automatic testing. The OSL (open-short-load) two-tier de-embedding, the calibration verification model, the accurate PAE (power added efficiency) testing, and the optimized vector cold-source NF (noise figure) measurement techniques are integrated in this solution to improve the measurement accuracy. A dual-core topology formed by an IPC (industrial personal computer) and a VNA (vector network analyzer), and automatic test software based on a three-level driver architecture, are applied to enhance the test efficiency. The benefit of this solution is that all the data for an MMIC can be acquired in only one contact, which shows state-of-the-art accuracy and efficiency.

  11. Robust and Effective Component-based Banknote Recognition for the Blind.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, Yingli

    2012-11-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: it handles a variety of currency designs and bills in various conditions; 3) high efficiency: it recognizes banknotes quickly; and 4) ease of use: it helps blind users to aim at the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system was also tested by blind users.
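
    The matching-plus-spatial-consistency idea can be sketched with freely available tools; the fragment below uses ORB as a stand-in for the patent-encumbered SURF features named in the abstract, the file names are placeholders, and the thresholds are arbitrary, so it is only an outline of the approach rather than the authors' system.

      import cv2
      import numpy as np

      query = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)            # scene captured by the user
      template = cv2.imread("banknote_component.jpg", cv2.IMREAD_GRAYSCALE)   # reference component

      orb = cv2.ORB_create(nfeatures=1000)
      kp_t, des_t = orb.detectAndCompute(template, None)
      kp_q, des_q = orb.detectAndCompute(query, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      knn = matcher.knnMatch(des_t, des_q, k=2)
      good = [m[0] for m in knn if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test

      detected = False
      if len(good) >= 10:                           # spatial-consistency check via RANSAC homography
          src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp_q[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          detected = H is not None and mask is not None and int(mask.sum()) >= 10
      print("banknote component detected:", detected)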

  12. Automatic evaluation of radiographs with the REBUS system

    International Nuclear Information System (INIS)

    Keck, R.; Coen, G.

    1987-01-01

    Digital image processing has become a top-ranking quality assurance method in industry in the last few years, and still promises improvements in the future. One of the main reasons for this development is the fact that, for specific applications, digital image processing has matured from simple image processing (deletion of unimportant marginal data, edge detection, signal-to-noise improvement) to automatic image evaluation. As an example of such specific applications, the article explains the detection and classification of flaws in welded seams or joints by means of radiographic testing. (orig./HP) [de

  13. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  14. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes in low quality variance (e.g., various scales and noises). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification at different scales and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs in producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of the RS scene by directly propagating the label information to the fully-connected layers. A joint loss is used to minimize the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. The experiments conducted on three different-scale RS scene datasets show that the DSFATN method has achieved excellent performance and great robustness at different scales and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.

  15. Automatic Atrial Fibrillation Detection: A Novel Approach Using Discrete Wavelet Transform and Heart Rate Variability

    DEFF Research Database (Denmark)

    Bruun, Iben H.; Hissabu, Semira M. S.; Poulsen, Erik S.

    2017-01-01

    be used as a screening tool for patients suspected to have AF. The method includes an automatic peak detection prior to the feature extraction, as well as a noise cancellation technique followed by a bagged tree classification. Simulation studies on the MIT-BIH Atrial Fibrillation database was performed...

  16. Phase noise mitigation of QPSK signal utilizing phase-locked multiplexing of signal harmonics and amplitude saturation.

    Science.gov (United States)

    Mohajerin-Ariaei, Amirhossein; Ziyadi, Morteza; Chitgarha, Mohammad Reza; Almaiman, Ahmed; Cao, Yinwen; Shamee, Bishara; Yang, Jeng-Yuan; Akasaka, Youichi; Sekiya, Motoyoshi; Takasaka, Shigehiro; Sugizaki, Ryuichi; Touch, Joseph D; Tur, Moshe; Langrock, Carsten; Fejer, Martin M; Willner, Alan E

    2015-07-15

    We demonstrate an all-optical phase noise mitigation scheme based on the generation, delay, and coherent summation of higher-order signal harmonics. The signal, its third-order harmonic, and their corresponding delayed variant conjugates create a staircase phase-transfer function that quantizes the phase of the quadrature-phase-shift-keying (QPSK) signal to mitigate phase noise. The signal and the harmonics are automatically phase-locked multiplexed, avoiding the need for a phase-based feedback loop and injection locking to maintain coherency. The residual phase noise converts to amplitude noise in the quantizer stage, which is suppressed by parametric amplification in the saturation regime. Phase noise reduction of ∼40% and an OSNR gain of ∼3 dB at a BER of 10^-3 are experimentally demonstrated for 20- and 30-Gbaud QPSK input signals.

  17. Automatic identification of motion artifacts in EHG recording for robust analysis of uterine contractions.

    Science.gov (United States)

    Ye-Lin, Yiyao; Garcia-Casado, Javier; Prats-Boluda, Gema; Alberola-Rubio, José; Perales, Alfredo

    2014-01-01

    Electrohysterography (EHG) is a noninvasive technique for monitoring uterine electrical activity. However, the presence of artifacts in the EHG signal may give rise to erroneous interpretations and make it difficult to extract useful information from these recordings. The aim of this work was to develop an automatic system of segmenting EHG recordings that distinguishes between uterine contractions and artifacts. Firstly, the segmentation is performed using an algorithm that generates the TOCO-like signal derived from the EHG and detects windows with significant changes in amplitude. After that, these segments are classified in two groups: artifacted and nonartifacted signals. To develop a classifier, a total of eleven spectral, temporal, and nonlinear features were calculated from EHG signal windows from 12 women in the first stage of labor that had previously been classified by experts. The combination of characteristics that led to the highest degree of accuracy in detecting artifacts was then determined. The results showed that it is possible to obtain automatic detection of motion artifacts in segmented EHG recordings with a precision of 92.2% using only seven features. The proposed algorithm and classifier together compose a useful tool for analyzing EHG signals and would help to promote clinical applications of this technique.

  18. Automatic Identification of Motion Artifacts in EHG Recording for Robust Analysis of Uterine Contractions

    Directory of Open Access Journals (Sweden)

    Yiyao Ye-Lin

    2014-01-01

    Full Text Available Electrohysterography (EHG) is a noninvasive technique for monitoring uterine electrical activity. However, the presence of artifacts in the EHG signal may give rise to erroneous interpretations and make it difficult to extract useful information from these recordings. The aim of this work was to develop an automatic system of segmenting EHG recordings that distinguishes between uterine contractions and artifacts. Firstly, the segmentation is performed using an algorithm that generates the TOCO-like signal derived from the EHG and detects windows with significant changes in amplitude. After that, these segments are classified in two groups: artifacted and nonartifacted signals. To develop a classifier, a total of eleven spectral, temporal, and nonlinear features were calculated from EHG signal windows from 12 women in the first stage of labor that had previously been classified by experts. The combination of characteristics that led to the highest degree of accuracy in detecting artifacts was then determined. The results showed that it is possible to obtain automatic detection of motion artifacts in segmented EHG recordings with a precision of 92.2% using only seven features. The proposed algorithm and classifier together compose a useful tool for analyzing EHG signals and would help to promote clinical applications of this technique.

  19. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise includes the random noise component, while spatial noise includes the pattern noise component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are based on standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and videocameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras consistently in real time. The modified ASNT method is used. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time needed for registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
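    As a sketch of the general two-frame idea behind such measurements (not the ASNT method itself), the per-pixel temporal variance can be estimated from the difference of two registered frames and fitted against the mean signal with a linear shot-noise model; the camera parameters and synthetic scene below are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic example: two frames of the same (nonuniform) scene, corrupted by
      # signal-dependent shot noise plus signal-independent dark temporal noise.
      gain, dark_sigma = 0.5, 2.0                      # assumed camera parameters
      scene = rng.uniform(50, 3500, size=(512, 512))   # mean signal per pixel (DN)
      frames = [scene + rng.normal(0, np.sqrt(gain * scene + dark_sigma**2))
                for _ in range(2)]

      # Temporal noise from just two frames: the variance of the per-pixel difference
      # is twice the temporal variance, so divide by 2.
      mean_signal = 0.5 * (frames[0] + frames[1])
      diff = frames[0] - frames[1]

      # Bin pixels by mean signal and fit variance = gain * signal + dark variance,
      # the linear (Poisson-like) shot-noise model away from saturation.
      bins = np.linspace(mean_signal.min(), mean_signal.max(), 30)
      idx = np.digitize(mean_signal.ravel(), bins)
      bin_mean = np.array([mean_signal.ravel()[idx == i].mean() for i in range(1, 30)])
      bin_var = np.array([0.5 * diff.ravel()[idx == i].var() for i in range(1, 30)])

      slope, intercept = np.polyfit(bin_mean, bin_var, 1)
      print(f"estimated conversion gain ~ {slope:.3f} (true {gain}), "
            f"dark temporal noise ~ {np.sqrt(max(intercept, 0)):.2f} DN (true {dark_sigma})")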

  20. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise densities. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  1. Secure Image Encryption Based On a Chua Chaotic Noise Generator

    Directory of Open Access Journals (Sweden)

    A. S. Andreatos

    2013-10-01

    Full Text Available This paper presents a secure image cryptography telecom system based on a Chua's circuit chaotic noise generator. A chaotic system based on synchronised Master–Slave Chua's circuits has been used as a chaotic true random number generator (CTRNG). Chaotic systems present unpredictable and complex behaviour. This characteristic, together with the dependence on the initial conditions as well as the tolerance of the circuit components, makes CTRNGs ideal for cryptography. In the proposed system, the transmitter mixes an input image with chaotic noise produced by a CTRNG. Using thresholding techniques, the chaotic signal is converted to a true random bit sequence. The receiver must be able to reproduce exactly the same chaotic noise in order to subtract it from the received signal. This becomes possible with synchronisation between the two Chua's circuits: through the use of specific techniques, the trajectory of the Slave chaotic system can be bound to that of the Master circuit, producing (almost) identical behaviour. Additional blocks have been used in order to make the system highly parameterisable and robust against common attacks. The whole system is simulated in Matlab. Simulation results demonstrate satisfactory performance, as well as robustness against cryptanalysis. The system works with both greyscale and colour JPG images.

  2. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Performance of an automatic dose control system for CT. Patient studies

    Energy Technology Data Exchange (ETDEWEB)

    Stumpp, P.; Gosch, D.; Kuehn, A.; Sorge, I.; Kahn, T. [Universitaetsklinikum Leipzig (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Weber, D. [St. Elisabeth-Krankenhaus Leipzig (Germany). Roentgendiagnostik; Lehmkuhl, L. [Leipzig Univ. - Herzzentrum (Germany). Diagnostische und Interventionelle Radiologie; Nagel, H.D. [Dr. HD Nagel, Wissenschaft und Technik fuer die Radiologie, Buchholz (Germany)

    2013-02-15

    Purpose: To study the effect of an automatic dose control (ADC) system with adequate noise characteristic on the individual perception of image noise and diagnostic acceptance compared to objectively measured image noise and the dose reductions achieved in a representative group of patients. Materials and Methods: In a retrospective study two matched cohorts of 20 patients each were identified: a manual cohort with exposure settings according to body size (small - regular - large) and an ADC cohort with exposure settings calculated by the ADC system (DoseRight 2.0™, Philips Healthcare). For each patient, 12 images from 6 defined anatomic levels from contrast-enhanced scans of chest and abdomen/pelvis were analyzed by 4 independent readers concerning image noise and diagnostic acceptance on a five-point Likert scale and evaluated for objectively measured image noise. Radiation exposure was calculated from recorded exposure data. Results: Use of the ADC system reduced the average effective dose for patients by 36 % in chest scans (3.2 vs. 4.9 mSv) and by 17 % in abdomen/pelvis scans (7.6 vs. 8.3 mSv). Average objective noise was slightly lower in the manual cohort (11.1 vs. 12.8 HU), correlating with a slightly better rating in subjective noise score (4.4 vs. 4.2). However, diagnostic acceptance was rated almost equal in both cohorts with excellent image quality (4.6 vs. 4.5). Conclusion: Use of an ADC system with adequate noise characteristic leads to significant reductions in radiation exposure for patients while maintaining excellent image quality. (orig.)

  4. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  5. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    Science.gov (United States)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  6. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    Science.gov (United States)

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599

  7. Automatic intra-operative generation of geometric left atrium/pulmonary vein models from rotational X-ray angiography.

    Science.gov (United States)

    Meyer, Carsten; Manzke, Robert; Peters, Jochen; Ecabert, Olivier; Kneser, Reinhard; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2008-01-01

    Pre-procedural imaging with cardiac CT or MR has become popular for guiding complex electrophysiology procedures such as those used for atrial fibrillation ablation therapy. Electroanatomical mapping and ablation within the left atrium and pulmonary veins (LAPV) is facilitated using such data, however the pre-procedural anatomy can be quite different from that at the time of intervention. Recently, a method for intra-procedural LAPV imaging has been developed based on contrast-enhanced 3-D rotational X-ray angiography (3-D RA). These intraprocedural data now create a compelling need for rapid and automated extraction of the LAPV geometry for catheter guidance. We present a new approach to automatic intra-procedural generation of LAPV surfaces from 3-D RA volumes. Using model-based segmentation, our technique is robust to imaging noise and artifacts typical of 3-D RA imaging, strongly minimizes the user interaction time required for segmentation, and eliminates inter-subject variability. Our findings in 33 patients indicate that intra-procedural LAPV surface models accurately represent the anatomy at the time of intervention and are comparable to pre-procedural models derived from CTA or MRA.

  8. The impact of musicianship on the cortical mechanisms related to separating speech from background noise.

    Science.gov (United States)

    Zendel, Benjamin Rich; Tremblay, Charles-David; Belleville, Sylvie; Peretz, Isabelle

    2015-05-01

    Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.

  9. Histogram Equalization to Model Adaptation for Robust Speech Recognition

    Directory of Open Access Journals (Sweden)

    Suh Youngjoo

    2010-01-01

    Full Text Available We propose a new model adaptation method based on the histogram equalization technique for providing robustness in noisy environments. The trained acoustic mean models of a speech recognizer are adapted into environmentally matched conditions by using the histogram equalization algorithm on a single utterance basis. For more robust speech recognition in the heavily noisy conditions, trained acoustic covariance models are efficiently adapted by the signal-to-noise ratio-dependent linear interpolation between trained covariance models and utterance-level sample covariance models. Speech recognition experiments on both the digit-based Aurora2 task and the large vocabulary-based task showed that the proposed model adaptation approach provides significant performance improvements compared to the baseline speech recognizer trained on the clean speech data.
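    The record above equalizes model statistics rather than features, but the core histogram-equalization operation is the same quantile mapping in either direction. A minimal sketch of that mapping, with synthetic Gaussian data standing in for acoustic features, is given below; the quantile count and the data are illustrative.

      import numpy as np

      def histogram_equalize(noisy_feat, reference_feat, n_quantiles=100):
          """Map each noisy feature value onto the reference distribution by matching
          empirical CDFs (quantile mapping), the core of histogram-equalization-based
          robustness methods."""
          qs = np.linspace(0.0, 1.0, n_quantiles)
          src_q = np.quantile(noisy_feat, qs)       # quantiles of the noisy (test) data
          ref_q = np.quantile(reference_feat, qs)   # quantiles of the clean (training) data
          # CDF of x under the noisy data, then inverse CDF under the reference data.
          return np.interp(noisy_feat, src_q, ref_q)

      # Toy check: clean features are zero-mean unit-variance; noise shifts and scales them.
      rng = np.random.default_rng(1)
      clean = rng.normal(0.0, 1.0, 5000)
      noisy = 0.5 * rng.normal(0.0, 1.0, 2000) + 3.0
      equalized = histogram_equalize(noisy, clean)
      print(f"before: mean {noisy.mean():+.2f}, std {noisy.std():.2f}; "
            f"after: mean {equalized.mean():+.2f}, std {equalized.std():.2f}")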

  10. Representation of acoustic signals in the eighth nerve of the Tokay gecko. II. Masking of pure tones with noise.

    Science.gov (United States)

    Sams-Dodd, F; Capranica, R R

    1996-10-01

    Acoustic signals are generally encoded in the peripheral auditory system of vertebrates by a duality scheme. For frequency components that fall within the excitatory tuning curve, individual eighth nerve fibers can encode the effective spectral energy by a spike-rate code, while simultaneously preserving the signal waveform periodicity of lower frequency components by phase-locked spike-train discharges. To explore how robust this duality of representation may be in the presence of noise, we recorded the responses of auditory fibers in the eighth nerve of the Tokay gecko to tonal stimuli when masking noise was added simultaneously. We found that their spike-rate functions reached plateau levels fairly rapidly in the presence of noise, so the ability to signal the presence of a tone by a concomitant change in firing rate was quickly lost. On the other hand, their synchronization functions maintained a high degree of phase-locked firings to the tone even in the presence of high-intensity masking noise, thus enabling a robust detection of the tonal signal. Critical ratios (CR) and critical bandwidths showed that in the frequency range where units are able to phase-lock to the tonal periodicity, the CR bands were relatively narrow and the bandwidths were independent of noise level. However, for higher frequency tones where phase-locking fails and only spike-rate codes apply, the CR bands were much wider and depended upon noise level, so that their ability to filter tones out of a noisy background degraded with increasing noise levels. The greater robustness of phase-locked temporal encoding contrasted with spike-rate coding verifies an important advantage in using lower frequency signals for communication in noisy environments.

  11. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
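    A toy illustration of the weighting idea behind the γ-divergence approach: each observation is weighted by its fitted likelihood raised to the power γ, so suspected mislabels are automatically down-weighted. The simple gradient scheme and the data below are illustrative only; they are not the paper's exact estimating equation or algorithm.

      import numpy as np

      def gamma_logistic(X, y, gamma=0.5, n_iter=2000, lr=0.5):
          """Toy gamma-weighted logistic regression: each observation is weighted by
          its fitted likelihood raised to the power gamma, so badly fit (possibly
          mislabeled) points are down-weighted.  gamma = 0 recovers the unweighted fit."""
          beta = np.zeros(X.shape[1])
          for _ in range(n_iter):
              p = 1.0 / (1.0 + np.exp(-X @ beta))
              lik = np.where(y == 1, p, 1.0 - p)        # fitted likelihood per point
              w = lik ** gamma                          # small weight for suspected mislabels
              beta += lr * (X.T @ (w * (y - p))) / len(y)   # weighted score step
          return beta

      # Toy data with 10% flipped labels.
      rng = np.random.default_rng(2)
      X = np.c_[np.ones(500), rng.normal(size=(500, 2))]
      beta_true = np.array([0.0, 2.0, -1.5])
      y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
      flip = rng.random(500) < 0.10
      y[flip] = 1 - y[flip]

      print("gamma-weighted fit:", np.round(gamma_logistic(X, y, gamma=0.5), 2))
      print("unweighted fit:    ", np.round(gamma_logistic(X, y, gamma=0.0), 2))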

  12. Automatic detection of arterial input function in dynamic contrast enhanced MRI based on affinity propagation clustering.

    Science.gov (United States)

    Shi, Lin; Wang, Defeng; Liu, Wen; Fang, Kui; Wang, Yi-Xiang J; Huang, Wenhua; King, Ann D; Heng, Pheng Ann; Ahuja, Anil T

    2014-05-01

    To automatically and robustly detect the arterial input function (AIF) with high detection accuracy and low computational cost in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this study, we developed an automatic AIF detection method using an accelerated version (Fast-AP) of affinity propagation (AP) clustering. The validity of this Fast-AP-based method was proved on two DCE-MRI datasets, i.e., rat kidney and human head and neck. The detailed AIF detection performance of this proposed method was assessed in comparison with other clustering-based methods, namely original AP and K-means, as well as the manual AIF detection method. Both the automatic AP- and Fast-AP-based methods achieved satisfactory AIF detection accuracy, but the computational cost of Fast-AP could be reduced by 64.37-92.10% on the rat dataset and 73.18-90.18% on the human dataset compared with the cost of AP. The K-means yielded the lowest computational cost, but resulted in the lowest AIF detection accuracy. The experimental results demonstrated that both the AP- and Fast-AP-based methods were insensitive to the initialization of cluster centers, and had superior robustness compared with the K-means method. The Fast-AP-based method enables automatic AIF detection with high accuracy and efficiency. Copyright © 2013 Wiley Periodicals, Inc.
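    As an illustration of the clustering idea (not the paper's Fast-AP implementation), scikit-learn's AffinityPropagation can be applied to voxel time-intensity curves, after which an AIF-like cluster can be selected with a simple "early and tall peak" heuristic. The synthetic curves, the heuristic, and the parameter choices below are all assumptions for the sketch.

      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(3)
      n_time = 60
      t = np.arange(n_time)

      def gamma_variate(t0, amp, alpha=3.0, beta=1.5):
          """Simple gamma-variate bolus curve used to synthesize time-intensity data."""
          s = np.clip(t - t0, 0, None)
          return amp * (s ** alpha) * np.exp(-s / beta)

      # Synthetic DCE curves: a few arterial voxels (early, tall bolus), many tissue
      # voxels (late, low enhancement), plus noise.
      arterial = np.array([gamma_variate(5, 1.0) + rng.normal(0, 0.02, n_time) for _ in range(20)])
      tissue = np.array([gamma_variate(12, 0.25) + rng.normal(0, 0.02, n_time) for _ in range(300)])
      curves = np.vstack([arterial, tissue])

      # Cluster the curves; affinity propagation chooses the number of clusters itself.
      ap = AffinityPropagation(damping=0.9, random_state=0).fit(curves)
      labels = ap.labels_

      # Heuristic AIF choice: the cluster whose mean curve peaks earliest and highest.
      best, best_score = None, -np.inf
      for k in np.unique(labels):
          mean_curve = curves[labels == k].mean(axis=0)
          score = mean_curve.max() - 0.05 * mean_curve.argmax()   # favour tall, early peaks
          if score > best_score:
              best, best_score = k, score
      print(f"{labels.max() + 1} clusters found; AIF-like cluster has "
            f"{np.sum(labels == best)} voxels (20 arterial voxels simulated)")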

  13. [Study of CT Automatic Exposure Control System (CT-AEC) Optimization in CT Angiography of Lower Extremity Artery by Considering Contrast-to-Noise Ratio].

    Science.gov (United States)

    Inada, Satoshi; Masuda, Takanori; Maruyama, Naoya; Yamashita, Yukari; Sato, Tomoyasu; Imada, Naoyuki

    2016-01-01

    To evaluate the image quality and the effect of radiation dose reduction of the setting of the computed tomography automatic exposure control system (CT-AEC) in computed tomographic angiography (CTA) of the lower extremity artery. Two methods of setting the CT-AEC were compared [the conventional and the contrast-to-noise ratio (CNR) method]. The conventional method was set with a noise index (NI) of 14 and a tube current threshold of 10-750 mA. The CNR method was set with NI: 18, minimum tube current: (X+Y)/2 mA (X, Y: maximum X (Y)-axis tube current value of the leg at NI: 14), and maximum tube current: 750 mA. Image quality was evaluated by CNR, and radiation dose reduction was evaluated by the dose-length product (DLP). In the conventional method, mean CNRs for pelvis, femur, and leg were 19.9±4.8, 20.4±5.4, and 16.2±4.3, respectively. There was a significant difference between the CNRs of pelvis and leg (P<0.001), and between femur and leg (P<0.001). In the CNR method, mean CNRs for pelvis, femur, and leg were 15.2±3.3, 15.3±3.2, and 15.3±3.1, respectively; no significant difference between pelvis, femur, and leg (P=0.973) was observed. Mean DLPs were 1457±434 mGy·cm in the conventional method, and 1049±434 mGy·cm in the CNR method. There was a significant difference between the DLPs of the conventional and the CNR method (P<0.001). The CNR method gave equal CNRs for pelvis, femur, and leg, and was beneficial for radiation dose reduction in CTA of the lower extremity artery.
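    For reference, the ROI-based contrast-to-noise ratio used to compare the two settings is typically computed as the difference of the mean enhanced and background attenuation values divided by the background standard deviation; the HU values below are illustrative, not data from the study.

      import numpy as np

      def contrast_to_noise_ratio(vessel_roi, background_roi):
          """Typical ROI-based CNR: difference of the mean HU values in the enhanced
          vessel and the background, divided by the background standard deviation
          (the image noise)."""
          return (vessel_roi.mean() - background_roi.mean()) / background_roi.std()

      # Illustrative HU samples for an enhanced artery and adjacent muscle.
      rng = np.random.default_rng(4)
      vessel = rng.normal(350.0, 12.0, 200)       # contrast-enhanced lumen
      background = rng.normal(55.0, 14.0, 200)    # background tissue, SD = image noise
      print(f"CNR ~ {contrast_to_noise_ratio(vessel, background):.1f}")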

  14. Adaptive Sensor Tuning for Seismic Event Detection in Environment with Electromagnetic Noise

    Science.gov (United States)

    Ziegler, Abra E.

    The goal of this research is to detect possible microseismic events at a carbon sequestration site. Data recorded on a continuous downhole microseismic array in the Farnsworth Field, an oil field in Northern Texas that hosts an ongoing carbon capture, utilization, and storage project, were evaluated using machine learning and reinforcement learning techniques to determine their effectiveness at seismic event detection on a dataset with electromagnetic noise. The data were recorded from a passive vertical monitoring array consisting of 16 levels of 3-component 15 Hz geophones installed in the field and continuously recording since January 2014. Electromagnetic and other noise recorded on the array has significantly impacted the utility of the data and it was necessary to characterize and filter the noise in order to attempt event detection. Traditional detection methods using short-term average/long-term average (STA/LTA) algorithms were evaluated and determined to be ineffective because of changing noise levels. To improve the performance of event detection and automatically and dynamically detect seismic events using effective data processing parameters, an adaptive sensor tuning (AST) algorithm developed by Sandia National Laboratories was utilized. AST exploits neuro-dynamic programming (reinforcement learning) trained with historic event data to automatically self-tune and determine optimal detection parameter settings. The key metric that guides the AST algorithm is consistency of each sensor with its nearest neighbors: parameters are automatically adjusted on a per station basis to be more or less sensitive to produce consistent agreement of detections in its neighborhood. The effects that changes in neighborhood configuration have on signal detection were explored, as it was determined that neighborhood-based detections significantly reduce the number of both missed and false detections in ground-truthed data. The performance of the AST algorithm was
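    For context, the STA/LTA baseline mentioned above declares a detection wherever the ratio of a short-term to a long-term average of the signal energy exceeds a threshold; because the threshold and window lengths are fixed, performance degrades when noise levels change, which is what the adaptive tuning addresses. A minimal sketch with illustrative window lengths and threshold:

      import numpy as np

      def sta_lta(trace, sta_len, lta_len):
          """Classic short-term-average / long-term-average ratio used as a simple
          event detector; an event is declared where the ratio exceeds a threshold."""
          sq = trace.astype(float) ** 2
          csum = np.concatenate(([0.0], np.cumsum(sq)))
          sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len      # short-window energy
          lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len      # long-window energy
          n = min(len(sta), len(lta))
          # Align both averages to the end of their windows before forming the ratio.
          return sta[-n:] / np.maximum(lta[-n:], 1e-12)

      # Toy trace: background noise with a burst of higher-amplitude "event" energy.
      rng = np.random.default_rng(5)
      trace = rng.normal(0, 1.0, 6000)
      trace[3000:3200] += rng.normal(0, 6.0, 200)

      ratio = sta_lta(trace, sta_len=50, lta_len=1000)
      threshold = 4.0                      # illustrative trigger level
      print("triggered samples:", int(np.sum(ratio > threshold)))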

  15. Automatic acoustic and vibration monitoring system for nuclear power plants

    International Nuclear Information System (INIS)

    Tothmatyas, Istvan; Illenyi, Andras; Kiss, Jozsef; Komaromi, Tibor; Nagy, Istvan; Olchvary, Geza

    1990-01-01

    A diagnostic system for nuclear power plant monitoring is described. Acoustic and vibration diagnostics can be applied to monitor various reactor components and auxiliary equipment including primary circuit machinery, leak detection, integrity of reactor vessel, loose parts monitoring. A noise diagnostic system has been developed for the Paks Nuclear Power Plant, to supervise the vibration state of primary circuit machinery. An automatic data acquisition and processing system is described for digitalizing and analysing diagnostic signals. (R.P.) 3 figs

  16. Neural-network-designed pulse sequences for robust control of singlet-triplet qubits

    Science.gov (United States)

    Yang, Xu-Chen; Yung, Man-Hong; Wang, Xin

    2018-04-01

    Composite pulses are essential for universal manipulation of singlet-triplet spin qubits. In the absence of noise, they are required to perform arbitrary single-qubit operations due to the special control constraint of a singlet-triplet qubit, while in a noisy environment, more complicated sequences have been developed to dynamically correct the error. Tailoring these sequences typically requires numerically solving a set of nonlinear equations. Here we demonstrate that these pulse sequences can be generated by a well-trained, double-layer neural network. For sequences designed for the noise-free case, the trained neural network is capable of producing almost exactly the same pulses known in the literature. For more complicated noise-correcting sequences, the neural network produces pulses with slightly different line shapes, but the robustness against noises remains comparable. These results indicate that the neural network can be a judicious and powerful alternative to existing techniques in developing pulse sequences for universal fault-tolerant quantum computation.

  17. Comparative Analysis for Robust Penalized Spline Smoothing Methods

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2014-01-01

    Full Text Available Smoothing noisy data is commonly encountered in the engineering domain, and currently robust penalized regression spline models are perceived to be the most promising methods for coping with this issue, due to their flexibility in capturing the nonlinear trends in the data and effectively alleviating the disturbance from the outliers. Against such a background, this paper conducts a thorough comparative analysis of two popular robust smoothing techniques, the M-type estimator and S-estimation for penalized regression splines, both of which are reelaborated starting from their origins, with their derivation process reformulated and the corresponding algorithms reorganized under a unified framework. Performances of these two estimators are thoroughly evaluated from the aspects of fitting accuracy, robustness, and execution time upon the MATLAB platform. Elaborately comparative experiments demonstrate that robust penalized spline smoothing methods are resistant to the noise effect compared with the nonrobust penalized LS spline regression method. Furthermore, the M-estimator exhibits stable performance only for observations with moderate perturbation error, whereas the S-estimator behaves fairly well even for heavily contaminated observations, but consumes more execution time. These findings can serve as guidance for the selection of an appropriate approach for smoothing noisy data.

  18. Atom lasers, coherent states, and coherence II. Maximally robust ensembles of pure states

    International Nuclear Information System (INIS)

    Wiseman, H.M.; Vaccaro, John A.

    2002-01-01

    As discussed in the preceding paper [Wiseman and Vaccaro, preceding paper, Phys. Rev. A 65, 043605 (2002)], the stationary state of an optical or atom laser far above threshold is a mixture of coherent field states with random phase, or, equivalently, a Poissonian mixture of number states. We are interested in which, if either, of these descriptions of ρ_ss as a stationary ensemble of pure states, is more natural. In the preceding paper we concentrated upon the question of whether descriptions such as these are physically realizable (PR). In this paper we investigate another relevant aspect of these ensembles, their robustness. A robust ensemble is one for which the pure states that comprise it survive relatively unchanged for a long time under the system evolution. We determine numerically the most robust ensembles as a function of the parameters in the laser model: the self-energy χ of the bosons in the laser mode, and the excess phase noise ν. We find that these most robust ensembles are PR ensembles, or similar to PR ensembles, for all values of these parameters. In the ideal laser limit (ν=χ=0), the most robust states are coherent states. As the phase noise or phase dispersion is increased through ν or the self-interaction of the bosons χ, respectively, the most robust states become more and more amplitude squeezed. We find scaling laws for these states, and give analytical derivations for them. As the phase diffusion or dispersion becomes so large that the laser output is no longer quantum coherent, the most robust states become so squeezed that they cease to have a well-defined coherent amplitude. That is, the quantum coherence of the laser output is manifest in the most robust PR ensemble being an ensemble of states with a well-defined coherent amplitude. This lends support to our approach of regarding robust PR ensembles as the most natural description of the state of the laser mode. It also has interesting implications for atom lasers in particular.

  19. Noise-assisted morphing of memory and logic function

    International Nuclear Information System (INIS)

    Kohar, Vivek; Sinha, Sudeshna

    2012-01-01

    We demonstrate how noise allows a bistable system to behave as a memory device, as well as a logic gate. Namely, in some optimal range of noise, the system can operate flexibly, both as a NAND/AND gate and a Set–Reset latch, by varying an asymmetrizing bias. Thus we show how this system implements memory, even for sub-threshold input signals, using noise constructively to store information. This can lead to the development of reconfigurable devices that can switch efficiently between memory tasks and logic operations. Highlights: We consider a nonlinear system in a noisy environment. We show that the system can function as a robust memory element. Further, the response of the system can be easily morphed from memory to logic operations. Such systems can potentially act as building blocks of “smart” computing devices.

  20. Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.

    Science.gov (United States)

    Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas

    2016-01-01

    This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of PET/CT 68Ga-DOTANOC in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and noise in the input data was predominant, as only 4 TFs presented a high/intermediate robustness against SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough against SUVmax except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity against delineation approach and our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.
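    For reference, the robustness metric used in such studies is typically the coefficient of variation of a feature measured on the same lesion across reconstruction settings; the feature values below are hypothetical.

      import numpy as np

      # Illustrative: coefficient of variation (COV) of one texture feature measured on
      # the same lesion across several reconstruction settings; a small COV indicates a
      # robust feature.
      entropy_values = np.array([6.12, 6.05, 6.21, 6.18, 6.09])   # hypothetical values
      cov = 100.0 * entropy_values.std(ddof=1) / entropy_values.mean()
      print(f"COV ~ {cov:.1f}%")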

  1. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar

    2018-01-01

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit yet-to-understand uncouth behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks, where it has been shown that there exists small adversarial noise that can result in a severe degradation in the performance of DNNs. To rigorously treat this, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. Then, we propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss that is consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks that are trained with 20-fold data augmentation.

  2. Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise

    KAUST Repository

    Alfadly, Modar M.

    2018-04-12

    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit yet-to-understand uncouth behaviours. One puzzling behaviour is the reaction of DNNs to various noise attacks, where it has been shown that there exists small adversarial noise that can result in a severe degradation in the performance of DNNs. To rigorously treat this, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We experimentally show that these expressions are tight under simple linearizations of deeper PL-DNNs, especially popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. Then, we propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. Upon training this network on a loss that is consolidated with the derived output probabilistic moments, the network is not only robust under very high variance Gaussian attacks but is also as robust as networks that are trained with 20-fold data augmentation.
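    The scalar building block of such moment propagation has a simple closed form: for X ~ N(μ, σ²), the mean and variance of ReLU(X) involve only the Gaussian pdf and cdf. The sketch below checks that closed form against Monte Carlo; the full network-level expressions in the record above are more general than this one-unit case.

      import numpy as np
      from scipy.stats import norm

      def relu_gaussian_moments(mu, sigma):
          """Closed-form mean and variance of ReLU(X) for X ~ N(mu, sigma^2),
          the scalar building block of moment propagation through a ReLU layer."""
          a = mu / sigma
          mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
          second = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
          return mean, second - mean**2

      mu, sigma = -0.3, 1.2
      mean, var = relu_gaussian_moments(mu, sigma)

      # Monte Carlo check of the closed form.
      samples = np.maximum(0.0, np.random.default_rng(6).normal(mu, sigma, 1_000_000))
      print(f"analytic    mean {mean:.4f}, var {var:.4f}")
      print(f"monte carlo mean {samples.mean():.4f}, var {samples.var():.4f}")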

  3. Detection of Anomalous Noise Events on Low-Capacity Acoustic Nodes for Dynamic Road Traffic Noise Mapping within an Hybrid WASN

    Directory of Open Access Journals (Sweden)

    Rosa Ma Alsina-Pagès

    2018-04-01

    Full Text Available One of the main aspects affecting the quality of life of people living in urban and suburban areas is the continuous exposure to high road traffic noise (RTN) levels. Nowadays, thanks to Wireless Acoustic Sensor Networks (WASN), noise in Smart Cities has started to be automatically mapped. To obtain a reliable picture of the RTN, those anomalous noise events (ANE) unrelated to road traffic (sirens, horns, people, etc.) should be removed from the noise map computation by means of an Anomalous Noise Event Detector (ANED). In Hybrid WASNs, with master-slave architecture, the ANED should be implemented in both high-capacity (Hi-Cap) and low-capacity (Lo-Cap) sensors, following the same principle to obtain consistent results. This work presents an ANED version to run in real-time on μController-based Lo-Cap sensors of a hybrid WASN, discriminating RTN from ANE through their Mel-based spectral energy differences. The experiments, considering 9 h and 8 min of real-life acoustic data from both urban and suburban environments, show the feasibility of the proposal both in terms of computational load and classification accuracy. Specifically, the ANED Lo-Cap requires around 1/6 of the computational load of the ANED Hi-Cap, while classification accuracies are slightly lower (around 10%). However, preliminary analyses show that these results could be improved by around 4% in the future by considering optimal frequency selection.

  4. Detection of Anomalous Noise Events on Low-Capacity Acoustic Nodes for Dynamic Road Traffic Noise Mapping within an Hybrid WASN.

    Science.gov (United States)

    Alsina-Pagès, Rosa Ma; Alías, Francesc; Socoró, Joan Claudi; Orga, Ferran

    2018-04-20

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is the continuous exposure to high road traffic noise (RTN) levels. Nowadays, thanks to Wireless Acoustic Sensor Networks (WASN), noise in Smart Cities has started to be automatically mapped. To obtain a reliable picture of the RTN, those anomalous noise events (ANE) unrelated to road traffic (sirens, horns, people, etc.) should be removed from the noise map computation by means of an Anomalous Noise Event Detector (ANED). In Hybrid WASNs, with master-slave architecture, the ANED should be implemented in both high-capacity (Hi-Cap) and low-capacity (Lo-Cap) sensors, following the same principle to obtain consistent results. This work presents an ANED version to run in real-time on μController-based Lo-Cap sensors of a hybrid WASN, discriminating RTN from ANE through their Mel-based spectral energy differences. The experiments, considering 9 h and 8 min of real-life acoustic data from both urban and suburban environments, show the feasibility of the proposal both in terms of computational load and classification accuracy. Specifically, the ANED Lo-Cap requires around 1/6 of the computational load of the ANED Hi-Cap, while classification accuracies are slightly lower (around 10%). However, preliminary analyses show that these results could be improved by around 4% in the future by considering optimal frequency selection.

  5. Automatic picking of the first arrival event using the unwrapped-phase of the Fourier transformed wavefield

    KAUST Repository

    Choi, Yun Seok

    2011-01-01

    First-arrival picking has long suffered from cycle skipping, especially when the first arrival is contaminated with noise or has experienced complex near-surface phenomena. We propose a new algorithm for automatic picking of first arrivals using an approach based on unwrapping the phase. We unwrap the phase by taking the derivative of the Fourier-transformed wavefield with respect to the angular frequency and isolate its amplitude component. To do so, we first apply a damping function to the seismic trace, calculate the derivative of the wavefield with respect to the angular frequency, divide the derivative of the wavefield by the wavefield itself, and finally take its imaginary part. We compare our derivative approach to the logarithmic one and show that the derivative approach does not suffer from the phase wrapping or cycle-skipping effects. Numerical examples show that our automatic picking algorithm gives convergent and reliable results for noise-free synthetic data and noisy field data. © 2011 Society of Exploration Geophysicists.
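    A numerical sketch of the described procedure under simplifying assumptions: damp the trace, form the derivative of its Fourier transform with respect to angular frequency (multiplication by -i·t in the time domain), divide by the transform and take the imaginary part, which behaves like a group delay dominated by the first arrival. The damping constant, frequency band, and synthetic trace are illustrative, and signs depend on the FFT convention.

      import numpy as np

      def pick_first_arrival(trace, dt, alpha=1.0):
          """Sketch of phase-derivative picking: damp the trace, take the derivative of
          its Fourier transform with respect to angular frequency (multiplication by
          -i*t in time), divide by the transform and use the imaginary part, which acts
          as a group delay dominated by the earliest (least-damped) arrival."""
          n = len(trace)
          t = np.arange(n) * dt
          damped = trace * np.exp(-alpha * t)          # damping emphasizes the first arrival
          W = np.fft.rfft(damped)
          dW = np.fft.rfft(-1j * t * damped)           # d/d(omega) of the transform
          group_delay = -np.imag(dW / (W + 1e-20))     # per-frequency delay estimate
          band = slice(8, 160)                         # ~2-40 Hz for this trace (illustrative)
          return np.median(group_delay[band])

      # Synthetic noisy trace with a first arrival at 0.8 s (a short Ricker-like pulse).
      dt, n = 0.004, 1000
      t = np.arange(n) * dt
      t0 = 0.8
      pulse = (1 - 2 * (np.pi * 15 * (t - t0))**2) * np.exp(-(np.pi * 15 * (t - t0))**2)
      rng = np.random.default_rng(7)
      trace = pulse + 0.02 * rng.normal(size=n)

      print(f"picked first-arrival time ~ {pick_first_arrival(trace, dt):.3f} s (true {t0} s)")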

  6. Scale invariant SURF detector and automatic clustering segmentation for infrared small targets detection

    Science.gov (United States)

    Zhang, Haiying; Bai, Jiaojiao; Li, Zhengjie; Liu, Yan; Liu, Kunhong

    2017-06-01

    The detection and discrimination of infrared small dim targets is a challenge in automatic target recognition (ATR), because there is no salient information of size, shape and texture. Many researchers focus on mining more discriminative information of targets in the temporal-spatial domain. However, such information may not be available with the change of imaging environments, and the target size and intensity keep changing with imaging distance. So in this paper, we propose a novel research scheme using density-based clustering and a backtracking strategy. In this scheme, the speeded up robust feature (SURF) detector is applied to capture candidate targets in each single frame at first. Then, these points are mapped into one frame, so that target traces form a local aggregation pattern. In order to isolate the targets from noises, a newly proposed density-based clustering algorithm, fast search and find of density peaks (FSFDP for short), is employed to cluster targets by their spatially intensive distribution. Two important factors of the algorithm, percent and γ, are exploited fully to determine the clustering scale automatically, so as to extract the trace with the highest clutter suppression ratio. At the final step, a backtracking algorithm is designed to detect and discriminate the target trace as well as to eliminate clutter. The consistency and continuity of the short-time target trajectory in the temporal-spatial domain is incorporated into the bounding function to speed up the pruning. Compared with several state-of-the-art methods, our algorithm is more effective for dim targets with a lower signal-to-clutter ratio (SCR). Furthermore, it avoids constructing the candidate target trajectory searching space, so its time complexity is limited to a polynomial level. The extensive experimental results show that it has superior performance in probability of detection (Pd) and false alarm suppression rate over a variety of complex backgrounds.

  7. Balanced detection for self-mixing interferometry to improve signal-to-noise ratio

    Science.gov (United States)

    Zhao, Changming; Norgia, Michele; Li, Kun

    2018-01-01

    We apply balanced detection to self-mixing interferometry for displacement and vibration measurement, using two photodiodes to implement a differential acquisition. The method is based on the phase opposition of the self-mixing signal measured between the two laser diode facet outputs. The balanced signal is obtained by enlarging the self-mixing signal and by canceling the common-mode noise, mainly due to disturbances on the laser supply and the transimpedance amplifier. Experimental results demonstrate that the signal-to-noise ratio improves significantly, with nearly twofold signal enhancement and more than a halving of the noise. This method allows for more robust, longer-distance measurement systems, especially using fringe counting.
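    A toy signal model of why the subtraction helps (illustrative only): the fringe component appears with opposite sign on the two photodiodes, while supply and amplifier disturbances appear with the same sign, so the difference doubles the fringes and cancels the common-mode part.

      import numpy as np

      rng = np.random.default_rng(8)
      t = np.linspace(0.0, 1.0, 20000)

      fringes = 0.1 * np.sin(2 * np.pi * 80 * t)                                    # self-mixing fringe signal
      common = 0.2 * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)    # common-mode disturbance

      pd_front = 1.0 + fringes + common   # photodiode at one laser facet
      pd_rear = 1.0 - fringes + common    # photodiode at the other facet: fringes in phase opposition

      balanced = pd_front - pd_rear       # fringes add, common-mode noise cancels
      print("residual common-mode after subtraction:", float(np.abs(balanced - 2 * fringes).max()))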

  8. Advances on the automatic estimation of the P-wave onset time

    Directory of Open Access Journals (Sweden)

    Luz García

    2016-09-01

    Full Text Available This work describes the automatic picking of the P-phase arrivals of the 3×10^6 seismic registers originated during the TOMO-ETNA experiment. Air-gun shots produced by the vessel "Sarmiento de Gamboa" and contemporary passive seismicity occurring on the island are recorded by a dense network of stations deployed for the experiment. In such a scenario, automatic processing is needed given: (i) the enormous amount of data, (ii) the low signal-to-noise ratio of many of the available registers and (iii) the accuracy needed for the velocity tomography resulting from the experiment. A preliminary processing is performed with the records obtained from all stations. Raw data formats from the different types of stations are unified, eliminating defective records and reducing noise through filtering in the band of interest for the phase picking. The Advanced Multiband Picking Algorithm (AMPA) is then used to process the big database obtained and determine the travel times of the seismic phases. The approach of AMPA, based on frequency multiband denoising and enhancement of expected arrivals through optimum detectors, is detailed together with its calibration and quality assessment procedure. Examples of its usage for active and passive seismic events are presented.

  9. Noise aspects at aerodynamic blade optimisation projects

    International Nuclear Information System (INIS)

    Schepers, J.G.

    1997-06-01

    The Netherlands Energy Research Foundation (ECN) has often been involved in industrial projects in which blade geometries are created automatically by means of numerical optimisation. Usually, these projects aim at the determination of the aerodynamically optimal wind turbine blade, i.e. the goal is to design a blade which is optimal with regard to energy yield. In other cases, blades have been designed which are optimal with regard to the cost of generated energy. However, it is obvious that the wind turbine blade designs which result from these optimisations are not necessarily optimal with regard to noise emission. In this paper an example is shown of an aerodynamic blade optimisation, using the ECN-program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities of noise reduction are investigated. 11 figs., 8 refs

  10. Engineering studies related to Skylab program. [assessment of automatic gain control data

    Science.gov (United States)

    Hayne, G. S.

    1973-01-01

    The relationship between the S-193 Automatic Gain Control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor was used for the assessment and is defined to be less than unity and a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effect of additive receiver, or white, noise. The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. Specific discussion of the digital computer programs used is also provided.

  11. Sleep disturbance caused by meaningful sounds and the effect of background noise

    Science.gov (United States)

    Namba, Seiichiro; Kuwano, Sonoko; Okamoto, Takehisa

    2004-10-01

    To study noise-induced sleep disturbance, a new procedure called the "noise interrupted method" has been developed. The experiment is conducted in the bedroom of the house of each subject. The sounds are reproduced with a mini-disk player which has an automatic reverse function. If the sound is disturbing and subjects cannot sleep, they are allowed to switch off the sound 1 h after they start to try to sleep. This switch-off (noise interrupted behavior) is an important index of sleep disturbance. The next morning they fill in a questionnaire in which the quality of sleep, the disturbance by the sounds, the time when they switched off the sound, etc. are asked. The results showed a good relationship between L and the percentage of subjects who could not sleep within an hour, and between L and the disturbance reported in the questionnaire. This suggests that this method is a useful tool to measure the sleep disturbance caused by noise under well-controlled conditions.

  12. Automatic design of digital synthetic gene circuits.

    Directory of Open Access Journals (Sweden)

    Mario A Marchisio

    2011-02-01

    Full Text Available De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.

  13. Audio watermarking robust against D/A and A/D conversions

    Directory of Open Access Journals (Sweden)

    Xiang Shijun

    2011-01-01

    Full Text Available Digital audio watermarking robust against digital-to-analog (D/A) and analog-to-digital (A/D) conversions is an important issue. In a number of watermark application scenarios, D/A and A/D conversions are involved. In this article, we first investigate the degradation due to DA/AD conversions via sound cards, which can be decomposed into volume change, additional noise, and time-scale modification (TSM). Then, we propose a solution for DA/AD conversions by considering the effects of volume change, additional noise and TSM. For the volume change, we introduce a relation-based watermarking method that modifies groups of the energy relations of three adjacent DWT coefficient sections. For the additional noise, we pick the lowest-frequency coefficients for watermarking. For the TSM, a synchronization technique (with synchronization codes) and an interpolation processing operation are exploited. Simulation tests show that the proposed audio watermarking algorithm provides satisfactory performance under DA/AD conversions and common audio processing manipulations.

  14. Noise-based frequency offset modulation in wideband frequency-selective fading channels

    NARCIS (Netherlands)

    Meijerink, Arjan; Cotton, S.L.; Bentum, Marinus Jan; Scanlon, W.G.

    2009-01-01

    A frequency offset modulation scheme using wideband noise carriers is considered. The main advantage of such a scheme is that it enables fast receiver synchronization without channel adaptation, while providing robustness to multipath fading and in-band interference. This is important for low-power

  15. Iterative noise removal from temperature and density profiles in the TJ-II Thomson scattering

    International Nuclear Information System (INIS)

    Farias, G.; Dormido-Canto, S.; Vega, J.; Santos, M.; Pastor, I.; Fingerhuth, S.; Ascencio, J.

    2014-01-01

    TJ-II Thomson Scattering diagnostic provides temperature and density profiles of plasma. The CCD camera acquires images that are corrupted with some kind of noise called stray-light. This noise degrades both image contrast and measurement accuracy, which could produce unreliable profiles of the diagnostic. So far, several approaches have been applied in order to decrease the noise in the TJ-II Thomson scattering images. Since the presence of the noise is not global but located in some particular regions of the image, advanced processing techniques are needed. However, such methods require manual fine-tuning of parameters to reach a good performance. In this contribution, an iterative image processing approach is applied in order to reduce the stray light effects in the images of the TJ-II Thomson scattering diagnostic. The proposed solution describes how the noise can be iteratively reduced in the images when a key parameter is automatically adjusted during the iterative process.

  16. Iterative noise removal from temperature and density profiles in the TJ-II Thomson scattering

    Energy Technology Data Exchange (ETDEWEB)

    Farias, G., E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso (Chile); Dormido-Canto, S., E-mail: sebas@dia.uned.es [Departamento de Informática y Automática, UNED, 28040 Madrid (Spain); Vega, J., E-mail: jesus.vega@ciemat.es [Asociación EURATOM/CIEMAT para Fusión, Avd. Complutense 22, 28040 Madrid (Spain); Santos, M., E-mail: msantos@ucm.es [Departamento de Arquitectura de Computadores y Automática, Universidad Complutense de Madrid, 28040 Madrid (Spain); Pastor, I., E-mail: ignacio.pastor@ciemat.es [Asociación EURATOM/CIEMAT para Fusión, Avd. Complutense 22, 28040 Madrid (Spain); Fingerhuth, S., E-mail: sebastian.fingerhuth@ucv.cl [Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso (Chile); Ascencio, J., E-mail: j_ascencio21@hotmail.com [Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso (Chile)

    2014-05-15

    The TJ-II Thomson scattering diagnostic provides temperature and density profiles of the plasma. The CCD camera acquires images that are corrupted by a type of noise called stray light. This noise degrades both image contrast and measurement accuracy, which can produce unreliable profiles from the diagnostic. So far, several approaches have been applied in order to decrease the noise in the TJ-II Thomson scattering images. Since the presence of the noise is not global but is located in particular regions of the image, advanced processing techniques are needed. However, such methods require manual fine-tuning of parameters to reach good performance. In this contribution, an iterative image processing approach is applied in order to reduce the stray-light effects in the images of the TJ-II Thomson scattering diagnostic. The proposed solution describes how the noise can be iteratively reduced in the images when a key parameter is automatically adjusted during the iterative process.

  17. Robust T1-weighted structural brain imaging and morphometry at 7T using MP2RAGE.

    Directory of Open Access Journals (Sweden)

    Kieran R O'Brien

    Full Text Available PURPOSE: To suppress the noise, by sacrificing some of the signal homogeneity for numerical stability, in uniform T1-weighted (T1w) images obtained with the magnetization-prepared 2 rapid gradient echoes sequence (MP2RAGE), and to compare the clinical utility of these robust T1w images against the uniform T1w images. MATERIALS AND METHODS: 8 healthy subjects (29.0 ± 4.1 years; 6 male), who provided written consent, underwent two scan sessions within a 24 hour period on a 7T head-only scanner. The uniform and robust T1w image volumes were calculated inline on the scanner. Two experienced radiologists qualitatively rated the images for general image quality, 7T-specific artefacts, and local structure definition. Voxel-based and volume-based morphometry packages were used to compare the segmentation quality between the uniform and robust images. Statistical differences were evaluated by using a positive-sided Wilcoxon rank test. RESULTS: The robust image suppresses background noise inside and outside the skull. The inhomogeneity introduced was rated as mild. The robust image was ranked significantly higher than the uniform image for both observers (observer 1/2, p-value = 0.0006/0.0004). In particular, improved delineation of the pituitary gland and cerebellar lobes was observed in the robust versus the uniform T1w image. The reproducibility of the segmentation results between repeat scans improved (p-value = 0.0004) from an average volumetric difference across structures of ≈ 6.6% for the uniform image to ≈ 2.4% for the robust T1w image. CONCLUSIONS: The robust T1w image enables MP2RAGE to produce clinically familiar T1w images, in addition to T1 maps, which can be readily used in morphometry packages.

  18. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

    Full Text Available This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. However, mobile laser scanning in particular often yields incomplete data, as street poles for example are only scanned from the road side. Moreover, the existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components obtained by RPCA allow the cylinder direction to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles then properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different radii. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, on the other hand, fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter-at-breast-height estimation for trees, and building and bridge information modelling.
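
    For orientation, a plain, non-robust version of the same pipeline can be sketched in a few lines: estimate the cylinder axis with ordinary PCA, project the points onto the plane normal to the axis, and fit a circle by least squares (Kasa fit). The cited method replaces both steps with robust counterparts (RPCA and a robust circle fit), which are not reproduced here; the plain version below also assumes an elongated cylinder so that the axis coincides with the direction of largest variance.

```python
import numpy as np

def fit_cylinder_pca(points):
    """Non-robust baseline: PCA axis estimate + least-squares circle fit."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                       # direction of largest variance
    # Orthonormal basis (u, v) of the plane perpendicular to the axis.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    proj = np.column_stack([centered @ u, centered @ v])
    # Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c, solved by linear least squares.
    A = np.column_stack([2.0 * proj, np.ones(len(proj))])
    rhs = (proj ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return axis, center, radius
```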

  19. Retrieving robust noise-based seismic velocity changes from sparse data sets: synthetic tests and application to Klyuchevskoy volcanic group (Kamchatka)

    Science.gov (United States)

    Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N. M.; Droznin, D. V.; Droznina, S. Ya; Senyukov, S. L.; Gordeev, E. I.

    2018-05-01

    Continuous noise-based monitoring of seismic velocity changes provides insights into volcanic unrest, earthquake mechanisms and fluid injection in the sub-surface. The standard monitoring approach relies on measuring travel time changes of late coda arrivals between daily and reference noise cross-correlations, usually chosen as stacks of daily cross-correlations. The main assumption of this method is that the shape of the noise correlations does not change over time or, in other terms, that the ambient-noise sources are stationary through time. These conditions are not fulfilled when a strong episodic source of noise, such as a volcanic tremor for example, perturbs the reconstructed Green's function. In this paper we propose a general formulation for retrieving continuous time series of noise-based seismic velocity changes without the requirement of any arbitrary reference cross-correlation function. Instead, we measure the changes between all possible pairs of daily cross-correlations and invert them using different smoothing parameters to obtain the final velocity change curve. We perform synthetic tests in order to establish a general framework for future applications of this technique. In particular, we study the reliability of velocity change measurements versus the stability of noise cross-correlation functions. We apply this approach to a complex dataset of noise cross-correlations at Klyuchevskoy volcanic group (Kamchatka), hampered by loss of data and the presence of highly non-stationary seismic tremors.
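
    The core of the reference-free formulation lends itself to a compact least-squares sketch: every measurement between a pair of days constrains the difference of the daily velocity-change values, and the full series is recovered by inverting all pairwise measurements together with a smoothing term. The first-difference regularization and the zero-mean constraint below are illustrative choices; the weighting and smoothing used in the cited study may differ.

```python
import numpy as np

def invert_pairwise_dvv(pair_idx, dvv_pair, n_days, smooth=1.0):
    """Recover a daily dv/v series from pairwise measurements
    dvv_pair[k] ~ v[j] - v[i] for (i, j) = pair_idx[k]."""
    G = np.zeros((len(pair_idx), n_days))
    for k, (i, j) in enumerate(pair_idx):
        G[k, i], G[k, j] = -1.0, 1.0
    d = np.asarray(dvv_pair, dtype=float)
    # First-difference (Tikhonov) smoothing plus a zero-mean constraint
    # that removes the arbitrary reference level.
    D = np.diff(np.eye(n_days), axis=0)
    G_aug = np.vstack([G, smooth * D, np.ones((1, n_days))])
    d_aug = np.concatenate([d, np.zeros(n_days - 1), [0.0]])
    v, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
    return v
```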

  20. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model

    Energy Technology Data Exchange (ETDEWEB)

    He, Baochun; Huang, Cheng; Zhou, Shoujun; Hu, Qingmao; Jia, Fucang, E-mail: fc.jia@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055 (China); Sharp, Gregory [Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Fang, Chihua; Fan, Yingfang [Department of Hepatology (I), Zhujiang Hospital, Southern Medical University, Guangzhou 510280 (China)

    2016-05-15

    Purpose: A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. Methods: The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods—3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration—are used to establish shape correspondence. Results: The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. Conclusions: The proposed automatic approach

  1. Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.

    Science.gov (United States)

    He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang

    2016-05-01

    A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods-3D similarity registration, probability atlas B-spline, and their proposed deformable closest point registration-are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver

  2. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
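
    The first step of the scheme lends itself to a compact NumPy sketch: draw random patches, form the covariance of the vectorized patches, and take the square root of its smallest eigenvalue as the preliminary noise estimate. The learned nonlinear rectification of the second step is not reproduced, and the patch size and patch count below are illustrative.

```python
import numpy as np

def preliminary_noise_level(image, patch=7, n_patches=5000, seed=None):
    """Preliminary sigma estimate from the smallest eigenvalue of the
    covariance matrix of randomly drawn image patches."""
    rng = np.random.default_rng(seed)
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    ys = rng.integers(0, h - patch + 1, size=n_patches)
    xs = rng.integers(0, w - patch + 1, size=n_patches)
    patches = np.stack([img[y:y + patch, x:x + patch].ravel()
                        for y, x in zip(ys, xs)])
    cov = np.cov(patches, rowvar=False)
    smallest = np.linalg.eigvalsh(cov)[0]   # eigenvalues in ascending order
    return np.sqrt(max(smallest, 0.0))
```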

  3. Automatic microseismic event picking via unsupervised machine learning

    Science.gov (United States)

    Chen, Yangkang

    2018-01-01

    Effective and efficient arrival picking plays an important role in microseismic and earthquake data processing and imaging. Widely used short-term-average/long-term-average ratio (STA/LTA) based arrival picking algorithms suffer from sensitivity to moderate-to-strong random ambient noise. To make state-of-the-art arrival picking approaches effective, microseismic data need to be first pre-processed, for example by removing a sufficient amount of noise, and then analysed by arrival pickers. To overcome the noise issue in arrival picking for weak microseismic or earthquake events, I leverage machine learning techniques to help recognize seismic waveforms in microseismic or earthquake data. Because supervised machine learning algorithms depend on large volumes of well-designed training data, I utilize an unsupervised machine learning algorithm to cluster the time samples into two groups, that is, waveform points and non-waveform points. The fuzzy clustering algorithm has been demonstrated to be effective for this purpose. A group of synthetic, real microseismic and earthquake data sets with different levels of complexity show that the proposed method is much more robust than the state-of-the-art STA/LTA method in picking microseismic events, even in the case of moderately strong background noise.
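
    As an illustration of the clustering idea, the sketch below runs a tiny two-cluster fuzzy c-means on a per-sample absolute-amplitude feature and takes the first sample dominated by the high-amplitude cluster as the pick. The feature choice, fuzzification exponent and post-processing are simplifications assumed here, not the exact configuration of the cited work.

```python
import numpy as np

def fuzzy_cmeans_pick(trace, m=2.0, n_iter=50):
    """Two-cluster fuzzy c-means on |amplitude|; returns the index of the
    first sample assigned to the high-amplitude (waveform) cluster."""
    x = np.abs(np.asarray(trace, dtype=float)).reshape(-1, 1)
    centers = np.array([[x.min()], [x.max()]])      # low / high amplitude
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12           # distances, shape (n, 2)
        # Standard fuzzy c-means membership update.
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                         axis=2)
        centers = (u.T ** m @ x) / np.sum(u.T ** m, axis=1, keepdims=True)
    high = int(np.argmax(centers.ravel()))
    members = np.argmax(u, axis=1)
    picks = np.flatnonzero(members == high)
    return int(picks[0]) if picks.size else None
```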

  4. Quantum metrology subject to spatially correlated Markovian noise: restoring the Heisenberg limit

    International Nuclear Information System (INIS)

    Jeske, Jan; Cole, Jared H; Huelga, Susana F

    2014-01-01

    Environmental noise can hinder the metrological capabilities of entangled states. While the use of entanglement allows for Heisenberg-limited resolution, the largest permitted by quantum mechanics, deviations from strictly unitary dynamics quickly restore the standard scaling dictated by the central limit theorem. Product and maximally entangled states become asymptotically equivalent when the noisy evolution is both local and strictly Markovian. However, temporal correlations in the noise have been shown to lift this equivalence while fully (spatially) correlated noise allows for the identification of decoherence-free subspaces. Here we analyze precision limits in the presence of noise with finite correlation length and show that there exist robust entangled state preparations which display persistent Heisenberg scaling despite the environmental decoherence, even for small correlation length. Our results emphasize the relevance of noise correlations in the study of quantum advantage and could be relevant beyond metrological applications. (paper)

  5. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which presents robustness against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect that has to be considered when a watermarking algorithm is designed. In our proposed scheme, prior to being embedded into the image, the watermark signal is encoded using a convolutional encoder, which performs forward error correction and achieves better robustness. Then, the embedding process is carried out in the discrete cosine transform (DCT) domain of the image, using an image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark coded bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequences. The quality of the watermarked image is measured using the well-known indices Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained by using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison among the proposed and previously reported methods based on different techniques is also provided.
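
    Two steps of the pipeline are simple enough to sketch directly: the forward-error-correction encoding of the watermark bits and the BER-based presence decision. The generator polynomials and the BER threshold below are assumptions for illustration (a standard rate-1/2, constraint-length-3 code); the abstract does not fix them.

```python
import numpy as np

def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder (illustrative generators 7/5 octal)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return np.array(out, dtype=np.uint8)

def watermark_present(recovered_bits, original_bits, ber_threshold=0.2):
    """Declare the watermark present when the bit error rate between the
    recovered and original sequences falls below an assumed threshold."""
    rec = np.asarray(recovered_bits, dtype=np.uint8)
    org = np.asarray(original_bits, dtype=np.uint8)
    ber = float(np.mean(rec != org))
    return ber, ber < ber_threshold
```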

  6. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    Science.gov (United States)

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback in real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique using a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
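
    The compensation step after background detection can be sketched as a masked least-squares surface fit that is subtracted from the whole phase map. For brevity the sketch fits a low-order 2D polynomial instead of a Zernike basis; the CNN background detector is assumed to be available and only its binary mask is used here.

```python
import numpy as np

def compensate_phase(phase, background_mask, order=2):
    """Fit a smooth surface to the phase over background pixels only and
    subtract it (polynomial basis stands in for Zernike polynomials)."""
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = (xx / w - 0.5).ravel()
    y = (yy / h - 0.5).ravel()
    # Monomial basis up to the requested total order.
    terms = [x ** i * y ** j
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    mask = background_mask.ravel().astype(bool)
    coeffs, *_ = np.linalg.lstsq(A[mask], phase.ravel()[mask], rcond=None)
    aberration = (A @ coeffs).reshape(h, w)
    return phase - aberration
```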

  7. Shift operation control of automatic transmission by μ-synthesis; μ synthesis ni yoru jido hensokuki no hensoku seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Nagaoka, M; Nishiyama, Y; Nakayama, Y; Kamada, S [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    We have developed a control technology, to which robust control theory is applied, to improve the shift quality of an automatic transmission for a passenger car. When applying robust control theory to transmission control, many issues arise, such as difficulty in system identification and/or the limited computing capability of the ECU. Recently, we have obtained performance that allows the transmission to be robustly controlled with an onboard ECU, by improving the system identification process and reducing the model dimensions after the controller design is finalized. 6 refs., 7 figs.

  8. Anti-impulse-noise Edge Detection via Anisotropic Morphological Directional Derivatives.

    Science.gov (United States)

    Shui, Peng-Lang; Wang, Fu-Ping

    2017-07-13

    Traditional differential-based edge detection suffers from abrupt degradation in performance when images are corrupted by impulse noise. Morphological operators such as median filters and weighted median filters possess an intrinsic ability to counteract impulse noise. In this paper, by combining the biwindow configuration with weighted median filters, anisotropic morphological directional derivatives (AMDD) robust to impulse noise are proposed to measure the local grayscale variation around a pixel. For ideal step edges, the AMDD spatial response and directional representation are derived. The characteristics and edge resolution of two kinds of typical biwindows are analyzed thoroughly. In terms of the AMDD spatial response and directional representation of ideal step edges, the spatial matched filter is used to extract the edge strength map (ESM) from the AMDDs of an image. The spatial and directional matched filters are used to extract the edge direction map (EDM). Embedding the extracted ESM and EDM into the standard route of differential-based edge detection, an anti-impulse-noise AMDD-based edge detector is constructed. It is compared with existing state-of-the-art detectors on a recognized image dataset for edge detection evaluation. The results show that it attains competitive performance in noise-free and Gaussian noise cases and the best performance in impulse noise cases.

  9. Laser phase and frequency noise measurement by Michelson interferometer composed of a 3 × 3 optical fiber coupler.

    Science.gov (United States)

    Xu, Dan; Yang, Fei; Chen, Dijun; Wei, Fang; Cai, Haiwen; Fang, Zujie; Qu, Ronghui

    2015-08-24

    A method for measuring laser phase and frequency noise with an unbalanced Michelson interferometer composed of a 3 × 3 optical fiber coupler is proposed. The relations and differences among the power spectral density (PSD) of differential phase and frequency fluctuations, the PSD of instantaneous phase and frequency fluctuations, phase noise and linewidth are derived rigorously and discussed carefully. The method obtains the noise features of a narrow-linewidth laser conveniently without any specific assumptions or noise models. The technique is also used to characterize the noise features of a narrow-linewidth external-cavity semiconductor laser, which confirms the correctness and robustness of the method.

  10. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained

  11. Automatic measurement of images on astrometric plates

    Science.gov (United States)

    Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.

    1994-04-01

    We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached with special techniques (morphological filters, histogram properties and fitting models).

  12. Investigation of neural network paradigms for the development of automatic noise diagnostic/reactor surveillance systems

    International Nuclear Information System (INIS)

    Korsah, K.; Uhrig, R.E.

    1991-01-01

    The use of artificial intelligence (AI) techniques as an aid in the maintenance and operation of nuclear power plant systems has been recognized for the past several years, and several applications using expert systems technology currently exist. The authors investigated the backpropagation paradigm for the recognition of neutron noise power spectral density (PSD) signatures as a possible alternative to current methods based on statistical techniques. The goal is to advance the state of the art in the application of noise analysis techniques to monitor nuclear reactor internals. Continuous surveillance of reactor systems for structural degradation can be quite cost-effective because (1) the loss of mechanical integrity of the reactor internal components can be detected at an early stage before severe damage occurs, (2) unnecessary periodic maintenance can be avoided, (3) plant downtime can be reduced to a minimum, (4) a high level of plant safety can be maintained, and (5) it can be used to help justify the extension of a plant's operating license. The initial objectives were to use neutron noise PSD data from a pressurized water reactor, acquired over a period of ∼2 years by the Oak Ridge National Laboratory (ORNL) Power Spectral Density RECognition (PSDREC) system to develop networks that can (1) differentiate between normal neutron spectral data and anomalous spectral data (e.g., malfunctioning instrumentation); and (2) detect significant shifts in the positions of spectral resonances while reducing the effect of small, random shifts (in neutron noise analysis, shifts in the resonance(s) present in a neutron PSD spectrum are the primary means for diagnosing degradation of reactor internals). 11 refs, 8 figs

  13. Robust distributed cognitive relay beamforming

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-05-01

    In this paper, we present a distributed relay beamformer design for a cognitive radio network in which a cognitive (or secondary) transmit node communicates with a secondary receive node assisted by a set of cognitive non-regenerative relays. The secondary nodes share the spectrum with a licensed primary user (PU) node, and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. The proposed robust cognitive relay beamformer design seeks to minimize the total relay transmit power while ensuring that the transceiver signal-to-interference-plus-noise ratio and PU interference constraints are satisfied. The proposed design takes into account a parameter characterizing the error in the channel state information (CSI) to render the performance of the beamformer robust in the presence of imperfect CSI. Though the original problem is non-convex, we show that the proposed design can be reformulated as a tractable convex optimization problem that can be solved efficiently. Numerical results are provided and illustrate the performance of the proposed designs for different network operating conditions and parameters. © 2012 IEEE.

  14. Robust estimation of seismic coda shape

    Science.gov (United States)

    Nikkilä, Mikko; Polishchuk, Valentin; Krasnoshchekov, Dmitry

    2014-04-01

    We present a new method for estimation of seismic coda shape. It falls into the same class of methods as non-parametric shape reconstruction with the use of neural network techniques, where data are split into training and validation data sets. We particularly pursue the well-known problem of image reconstruction, formulated in this case as shape isolation in the presence of broadly defined noise. This combined approach is enabled by an intrinsic feature of the seismogram, which can be divided objectively into pre-signal seismic noise that lacks the target shape, and a remainder that contains the scattered waveforms composing the coda shape. In short, we separately apply the shape restoration procedure to the pre-signal seismic noise and to the event record, which provides successful delineation of the coda shape in the form of a smooth, almost non-oscillating function of time. The new algorithm uses a recently developed generalization of the classical computational-geometry tool of the α-shape. The generalization essentially yields robust shape estimation by locally ignoring a number of points treated as extreme values, noise or non-relevant data. Our algorithm is conceptually simple and enables a desired or pre-determined level of shape detail, constrainable by an arbitrary data-fit criterion. The proposed tool for coda shape delineation provides an alternative to moving averaging and/or other smoothing techniques frequently used for this purpose. The new algorithm is illustrated with an application to the problem of estimating the coda duration after a local event. The obtained relation coefficient between coda duration and epicentral distance is consistent with earlier findings in the region of interest.

  15. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Full Text Available Robust and reversible watermarking is a potential technique in many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain such that the histogram of these statistical values is shifted for data hiding. Firstly, the original audio is divided into non-overlapping equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
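
    The per-frame statistic described above is easy to sketch. The predictor inside each three-sample group is not specified in the abstract, so the middle-sample-minus-neighbour-mean rule below is an assumption; histogram shifting of the resulting feature values for embedding is omitted.

```python
import numpy as np

def frame_features(audio, frame_len=600):
    """Sum of per-group prediction errors in each non-overlapping frame
    (assumed predictor: middle sample vs. mean of its two neighbours)."""
    x = np.asarray(audio, dtype=float)
    n_frames = len(x) // frame_len
    feats = np.empty(n_frames)
    for f in range(n_frames):
        frame = x[f * frame_len:(f + 1) * frame_len]
        groups = frame[:len(frame) - len(frame) % 3].reshape(-1, 3)
        errors = groups[:, 1] - 0.5 * (groups[:, 0] + groups[:, 2])
        feats[f] = errors.sum()
    return feats
```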

  16. Region of interest based robust watermarking scheme for adaptation in small displays

    Science.gov (United States)

    Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar

    2010-02-01

    Nowadays, multimedia data can be easily replicated and their copyright is not effectively protected. Cryptography does not allow the use of digital data in its original form, and once the data is decrypted, it is no longer protected. Here we propose a new doubly protected digital image watermarking algorithm, which embeds watermark image blocks into adjacent regions of the host image itself based on their block similarity coefficients. The scheme is robust to various noise effects such as Poisson noise, Gaussian noise and random noise, and thereby provides double security against noise and attackers. As instrumentation applications require highly accurate data, the watermark image that is extracted back from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared to existing techniques, and in addition we have resized it for various displays. Adaptive resizing for displays of various sizes is also being investigated, wherein we crop the required information in a frame and zoom it for a large display or resize it for a small display using a threshold value; in either case the background is given little importance and only the foreground object gains importance, which will be helpful in performing surgeries.

  17. Line-robust statistics for continuous gravitational waves: safety in the case of unequal detector sensitivities

    International Nuclear Information System (INIS)

    Keitel, David; Prix, Reinhard

    2015-01-01

    The multi-detector F-statistic is close to optimal for detecting continuous gravitational waves (CWs) in Gaussian noise. However, it is susceptible to false alarms from instrumental artefacts, for example quasi-monochromatic disturbances (‘lines’), which resemble a CW signal more than Gaussian noise. In a recent paper (Keitel et al 2014 Phys. Rev. D 89 064023), a Bayesian model selection approach was used to derive line-robust detection statistics for CW signals, generalizing both the F-statistic and the F-statistic consistency veto technique and yielding improved performance in line-affected data. Here we investigate a generalization of the assumptions made in that paper: if a CW analysis uses data from two or more detectors with very different sensitivities, the line-robust statistics could be less effective. We investigate the boundaries within which they are still safe to use, in comparison with the F-statistic. Tests using synthetic draws show that the optimally-tuned version of the original line-robust statistic remains safe in most cases of practical interest. We also explore a simple idea on further improving the detection power and safety of these statistics, which we, however, find to be of limited practical use. (paper)

  18. Wide-range nuclear reactor temperature control using automatically tuned fuzzy logic controller

    International Nuclear Information System (INIS)

    Ramaswamy, P.; Edwards, R.M.; Lee, K.Y.

    1992-01-01

    In this paper, a fuzzy logic controller design for optimal reactor temperature control is presented. Since fuzzy logic controllers rely on an expert's knowledge of the process, they are hard to optimize. An optimal controller is used in this paper as a reference model, and a Kalman filter is used to automatically determine the rules for the fuzzy logic controller. To demonstrate the robustness of this design, a nonlinear six-delayed-neutron-group plant is controlled using a fuzzy logic controller that utilizes estimated reactor temperatures from a one-delayed-neutron-group observer. The fuzzy logic controller displayed good stability and performance robustness characteristics for a wide range of operation

  19. Two Systems for Automatic Music Genre Recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    We re-implement and test two state-of-the-art systems for automatic music genre classification; but unlike past works in this area, we look closer than ever before at their behavior. First, we look at specific instances where each system consistently applies the same wrong label across multiple trials of cross-validation. Second, we test the robustness of each system to spectral equalization. Finally, we test how well human subjects recognize the genres of music excerpts composed by each system to be highly genre representative. Our results suggest that neither high-performing system has a capacity to recognize music genre.

  20. Automatic target detection using binary template matching

    Science.gov (United States)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
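
    A minimal version of the two ingredients named above can be sketched as follows: adaptive binarization against a local box mean (the exact rule in the paper is not given here, so the block size and offset are assumptions) and binary template matching scored as the fraction of agreeing pixels in each window.

```python
import numpy as np

def binarize_adaptive(image, block=15, offset=0.0):
    """Threshold each pixel against the mean of a surrounding block,
    computed quickly with an integral image."""
    img = np.asarray(image, dtype=float)
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))            # zero row/column in front
    h, w = img.shape
    box_sum = (ii[block:block + h, block:block + w] - ii[:h, block:block + w]
               - ii[block:block + h, :w] + ii[:h, :w])
    local_mean = box_sum / (block * block)
    return (img > local_mean + offset).astype(np.uint8)

def match_binary_template(binary_img, binary_tmpl):
    """Exhaustive binary template matching; score = fraction of pixels that
    agree with the template at each placement."""
    H, W = binary_img.shape
    h, w = binary_tmpl.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = np.mean(binary_img[y:y + h, x:x + w] == binary_tmpl)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```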

  1. Automatic spectral imaging protocol selection and iterative reconstruction in abdominal CT with reduced contrast agent dose: initial experience.

    Science.gov (United States)

    Lv, Peijie; Liu, Jie; Chai, Yaru; Yan, Xiaopeng; Gao, Jianbo; Dong, Junqiang

    2017-01-01

    To evaluate the feasibility, image quality, and radiation dose of automatic spectral imaging protocol selection (ASIS) and adaptive statistical iterative reconstruction (ASIR) with reduced contrast agent dose in abdominal multiphase CT. One hundred and sixty patients were randomly divided into two scan protocols (n = 80 each; protocol A, 120 kVp/450 mgI/kg, filtered back projection algorithm (FBP); protocol B, spectral CT imaging with ASIS and 40 to 70 keV monochromatic images generated per 300 mgI/kg, ASIR algorithm). Quantitative parameters (image noise and contrast-to-noise ratios [CNRs]) and qualitative visual parameters (image noise, small structures, organ enhancement, and overall image quality) were compared. Monochromatic images at 50 keV and 60 keV provided similar or lower image noise, but higher contrast and overall image quality as compared with 120-kVp images. Despite the higher image noise, 40-keV images showed similar overall image quality compared to 120-kVp images. Radiation dose did not differ between the two protocols, while contrast agent dose in protocol B was reduced by 33%. Application of ASIR and ASIS to monochromatic imaging from 40 to 60 keV allowed contrast agent dose reduction with adequate image quality and without increasing radiation dose compared to 120 kVp with FBP. • Automatic spectral imaging protocol selection provides appropriate scan protocols. • Abdominal CT is feasible using spectral imaging and 300 mgI/kg contrast agent. • 50-keV monochromatic images with 50% ASIR provide optimal image quality.

  2. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  3. Robust and transferable quantification of NMR spectral quality using IROC analysis

    Science.gov (United States)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.

  4. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  5. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
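
    The region-based GTM framework that both records describe reduces, in its classical form, to building a small matrix from the region spread functions and solving one linear system. The sketch below implements that classical GTM step under the assumption that the RSFs (region masks convolved with the scanner PSF) are supplied; the symmetric sGTM variant, which works with spill-over between RSFs rather than between regions, is not reproduced here.

```python
import numpy as np

def gtm_correction(region_masks, rsf_images, observed_image):
    """Classical GTM partial volume correction: solve GTM @ true = observed,
    where GTM[i, j] is the mean of region j's spread function over region i."""
    n = len(region_masks)
    gtm = np.zeros((n, n))
    observed = np.zeros(n)
    for i, mask_i in enumerate(region_masks):
        idx = np.asarray(mask_i, dtype=bool)
        observed[i] = observed_image[idx].mean()
        for j, rsf_j in enumerate(rsf_images):
            gtm[i, j] = rsf_j[idx].mean()
    return np.linalg.solve(gtm, observed)     # corrected regional means
```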

  6. A practical exposure-equivalent metric for instrumentation noise in x-ray imaging systems

    International Nuclear Information System (INIS)

    Yadava, G K; Kuhls-Gilcrist, A T; Rudin, S; Patel, V K; Hoffmann, K R; Bednarek, D R

    2008-01-01

    The performance of high-sensitivity x-ray imagers may be limited by additive instrumentation noise rather than by quantum noise when operated at the low exposure rates used in fluoroscopic procedures. The equipment-invasive instrumentation noise measures (in terms of electrons) are generally difficult to make and are potentially not as helpful in clinical practice as would be a direct radiological representation of such noise that may be determined in the field. In this work, we define a clinically relevant representation for instrumentation noise in terms of noise-equivalent detector entrance exposure, termed the instrumentation noise-equivalent exposure (INEE), which can be determined through experimental measurements of noise-variance or signal-to-noise ratio (SNR). The INEE was measured for various detectors, thus demonstrating its usefulness in terms of providing information about the effective operating range of the various detectors. A simulation study is presented to demonstrate the robustness of this metric against post-processing, and its dependence on inherent detector blur. These studies suggest that the INEE may be a practical gauge to determine and compare the range of quantum-limited performance for clinical x-ray detectors of different design, with the implication that detector performance at exposures below the INEE will be instrumentation-noise limited rather than quantum-noise limited

  7. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
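
    The cleanup step that the abstract ends on, removing the labelled noise components from the data, can be sketched as an ordinary least-squares regression of the noise component time series out of every voxel's time series. This corresponds to an "aggressive" cleanup; FIX's default partial ("non-aggressive") regression and the classifier itself are not reproduced here.

```python
import numpy as np

def regress_out_noise(data, mixing, noise_idx):
    """Remove noise ICA components by regression.

    data      : array, shape (n_timepoints, n_voxels)
    mixing    : array, shape (n_timepoints, n_components) of component time series
    noise_idx : indices of components labelled as noise
    """
    noise_ts = mixing[:, noise_idx]
    beta, *_ = np.linalg.lstsq(noise_ts, data, rcond=None)
    return data - noise_ts @ beta
```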

  8. Robust Sequential Circuits Design Technique for Low Voltage and High Noise Scenarios

    Directory of Open Access Journals (Sweden)

    Garcia-Leyva Lancelot

    2016-01-01

    In this paper, we introduce an innovative input and output data redundancy principle for sequential block circuits, which are responsible for keeping the state of the system, and show its efficiency against other robust design approaches. The methodology is totally different from Von Neumann approaches, because elements are not replicated N times; instead, the circuits check the coherence of redundant input data and do not allow data propagation in case of discrepancy. This mechanism does not require voting devices.

  9. Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.

    Directory of Open Access Journals (Sweden)

    Clément Bailly

    Full Text Available This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of 68Ga-DOTANOC PET/CT in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of the investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and of noise in the input data was predominant, as only 4 TFs presented high/intermediate robustness compared with SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough compared with SUVmax, except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.

  10. Influence of musical training on understanding voiced and whispered speech in noise.

    Science.gov (United States)

    Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J

    2014-01-01

    This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.

  11. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    Full Text Available The Internet has affected our everyday life drastically. Vast volumes of information are exchanged over the Internet constantly, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership, copyrights and others can be addressed by using digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, resulting in further robustness of the proposed technique to attacks. The given method uses algorithms such as entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition in order to embed the watermark in a color image. Many experiments are performed using well-known signal processing attacks such as histogram equalization, adding noise and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.

  12. Noise suppression by noise

    OpenAIRE

    Vilar, J. M. G. (José M. G.), 1972-; Rubí Capaceti, José Miguel

    2001-01-01

    We have analyzed the interplay between an externally added noise and the intrinsic noise of systems that relax fast towards a stationary state, and found that increasing the intensity of the external noise can reduce the total noise of the system. We have established a general criterion for the appearance of this phenomenon and discussed two examples in detail.

  13. Fault tolerant deterministic secure quantum communication using logical Bell states against collective noise

    International Nuclear Information System (INIS)

    Wang Chao; Liu Jian-Wei; Shang Tao; Chen Xiu-Bo; Bi Ya-Gang

    2015-01-01

    This study proposes two novel fault tolerant deterministic secure quantum communication (DSQC) schemes resistant to collective noise using logical Bell states. Either DSQC scheme is constructed based on a new coding function, which is designed by exploiting the property of the corresponding logical Bell states immune to collective-dephasing noise and collective-rotation noise, respectively. The secret message can be encoded by two simple unitary operations and decoded by merely performing Bell measurements, which can make the proposed scheme more convenient in practical applications. Moreover, the strategy of one-step quanta transmission, together with the technique of decoy logical qubits checking not only reduces the influence of other noise existing in a quantum channel, but also guarantees the security of the communication between two legitimate users. The final analysis shows that the proposed schemes are feasible and robust against various well-known attacks over the collective noise channel. (paper)

  14. A Modified LQG Algorithm (MLQG) for Robust Control of Nonlinear Multivariable Systems

    Directory of Open Access Journals (Sweden)

    Jens G. Balchen

    1993-07-01

    Full Text Available The original LQG algorithm is often criticized for its lack of robustness. This is because, in the design of the estimator (Kalman filter), the process disturbance is assumed to be white noise. If the estimator is to give good estimates, the Kalman gain must be increased, which means that the estimator loses robustness. A solution to this problem is to replace the proportional Kalman gain matrix by a dynamic PI algorithm and, likewise, the proportional LQ feedback gain matrix by a PI algorithm. A tuning method is developed which facilitates the tuning of the modified LQG control system (MLQG) with only two tuning parameters.
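    The idea of replacing a purely proportional (Kalman-like) innovation correction with a PI correction can be sketched in a few lines. The matrices A, B, C and the gains Kp, Ki below are illustrative placeholders, not values from the paper, and the sketch omits the LQ feedback half of the MLQG scheme.

```python
import numpy as np

# Discrete-time observer with a PI correction of the innovation e = y - C x_hat,
# instead of a fixed proportional gain. All numbers are illustrative.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Kp = np.array([[0.6], [0.3]])    # proportional gain on the innovation
Ki = np.array([[0.05], [0.02]])  # integral gain on the innovation

def pi_observer_step(x_hat, e_int, u, y):
    """One prediction/correction step with a PI innovation term."""
    e = y - float(C @ x_hat)      # innovation
    e_int = e_int + e             # accumulated (integral) innovation
    x_hat = A @ x_hat + B * u + Kp * e + Ki * e_int
    return x_hat, e_int

x_hat, e_int = np.zeros((2, 1)), 0.0
for _ in range(50):
    y = 1.0 + 0.05 * np.random.randn()    # noisy constant measurement
    x_hat, e_int = pi_observer_step(x_hat, e_int, u=0.0, y=y)
print(x_hat.ravel())
```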

  15. Seismic noise level variation in South Korea

    Science.gov (United States)

    Sheen, D.; Shin, J.

    2008-12-01

    The variations of seismic background noise in South Korea have been investigated by means of power spectral analysis. The Korea Institute of Geoscience and Mineral Resources (KIGAM) and the Korea Meteorological Administration (KMA) operate nationwide seismic networks in South Korea, and, at the end of 2007, there were 30 broadband stations that had been operating for more than a year. In this study, we have estimated the power spectral density (PSD) of seismic noise for these 30 broadband stations from 2005 to 2007. Since we estimate PSDs from a large dataset of continuous waveforms, the robust PSD estimate of McNamara and Buland (2004) is used. In the frequency range 1-5 Hz, diurnal variations of noise are observed at most stations, and they are larger at coastal and insular stations than at inland stations. Some stations show day-to-day differences in the diurnal variations, which indicates that cultural activities contribute to the noise level of a station. The variation of the number of triggered stations, however, shows that cultural noise has little influence on the detection capability of the seismic network in South Korea. Seasonal variations are observed clearly in the range 0.1-0.5 Hz, while they are much less evident in the frequency range 1-5 Hz. We observed that strong peaks in the range 0.1-0.5 Hz occur in the summer, when Pacific typhoons are close to the Korean Peninsula.
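    A rough sketch of the per-segment PSD computation behind such noise surveys, using synthetic data and scipy's Welch estimator as a stand-in for the McNamara and Buland (2004) procedure; the sampling rate and band limits below are assumptions, not the networks' actual settings.

```python
import numpy as np
from scipy import signal

fs = 20.0                        # samples per second (assumed)
t = np.arange(0, 3600, 1 / fs)   # one hour of synthetic data
trace = np.random.randn(t.size) + 0.3 * np.sin(2 * np.pi * 0.2 * t)

f, pxx = signal.welch(trace, fs=fs, nperseg=4096)
band = (f >= 1.0) & (f <= 5.0)               # the 1-5 Hz "cultural noise" band
power_db = 10 * np.log10(np.mean(pxx[band]))
print(f"mean 1-5 Hz power: {power_db:.1f} dB")
# Repeating this hour by hour over several years would expose the diurnal and
# seasonal variations discussed in the abstract.
```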

  16. Automatic adjustment of bias current for direct current superconducting quantum interference device

    International Nuclear Information System (INIS)

    Makie-Fukuda, K.; Hotta, M.; Okajima, K.; Kado, H.

    1993-01-01

    A new method of adjusting the bias current of a dc superconducting quantum interference device (SQUID) is described. It is shown that the signal-to-noise ratio of a SQUID magnetometer connected in a flux-locked loop configuration is proportional to the second harmonic of the output signal from the SQUID. A circuit configuration that can automatically optimize a SQUID's bias current by measuring this second harmonic and adjusting the bias current accordingly is proposed.

  17. Robust Contextual Bandit via the Capped-$\\ell_{2}$ norm

    OpenAIRE

    Zhu, Feiyun; Zhu, Xinliang; Wang, Sheng; Yao, Jiawen; Huang, Junzhou

    2017-01-01

    This paper considers the actor-critic contextual bandit for the mobile health (mHealth) intervention. The state-of-the-art decision-making methods in mHealth generally assume that the noise in the dynamic system follows the Gaussian distribution. Those methods use the least-square-based algorithm to estimate the expected reward, which is prone to the existence of outliers. To deal with the issue of outliers, we propose a novel robust actor-critic contextual bandit method for the mHealth inter...

  18. Synchronisation of networked Kuramoto oscillators under stable Lévy noise

    Science.gov (United States)

    Kalloniatis, Alexander C.; Roberts, Dale O.

    2017-01-01

    We study the Kuramoto model on several classes of network topologies examining the dynamics under the influence of Lévy noise. Such noise exhibits heavier tails than Gaussian noise and allows us to understand how 'shocks' influence the individual oscillator and collective system behaviour. Skewed α-stable Lévy noise, equivalent to fractional diffusion perturbations, is considered. We perform numerical simulations for Erdős-Rényi (ER) and Barabási-Albert (BA) scale-free networks of size N = 1000 while varying the Lévy index α for the noise. We find that synchrony now assumes a surprising variety of forms, not seen for Gaussian-type noise, and changing with α: a noise-generated drift, a smooth α dependence of the point of cross-over of ER and BA networks in the degree of synchronisation, and a severe loss of synchronisation at low values of α. We also show that this robustness of the BA network across most values of α can also be understood as a consequence of the Laplacian of the graph working within the fractional Fokker-Planck equation of the linearised system, close to synchrony, with both eigenvalues and eigenvectors alternately contributing in different regimes of α.
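    A small Euler-Maruyama sketch of the noisy Kuramoto model on a random graph, using scipy's alpha-stable sampler for the Lévy increments. The network size, coupling strength, noise scale and α below are illustrative, not the paper's N = 1000 runs.

```python
import numpy as np
from scipy.stats import levy_stable

# Kuramoto model on an Erdos-Renyi graph driven by (skewed) alpha-stable noise.
N, p, K, alpha, beta = 200, 0.05, 2.0, 1.5, 0.5
dt, steps = 0.01, 2000
rng = np.random.default_rng(1)

A = np.triu(rng.random((N, N)) < p, 1)           # Erdos-Renyi adjacency
A = (A | A.T).astype(float)
omega = rng.standard_normal(N)                    # natural frequencies
theta = 2 * np.pi * rng.random(N)

for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    xi = levy_stable.rvs(alpha, beta, size=N)     # alpha-stable increments
    theta += dt * (omega + (K / N) * coupling) + 0.1 * dt ** (1 / alpha) * xi

r = np.abs(np.exp(1j * theta).mean())             # Kuramoto order parameter
print(f"order parameter r = {r:.2f}")
```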

  19. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which include distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangle (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.

  20. Advances in Modal Analysis Using a Robust and Multiscale Method

    Science.gov (United States)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  1. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
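    A minimal sketch of a SAM-style statistic with a simple weighting scheme, in the spirit of the modification described above. The weights and the fudge constant s0 are illustrative; the authors' exact modification is not reproduced here.

```python
import numpy as np

def sam_like_scores(x, y, s0=0.1, weights=None):
    """x, y: (genes x replicates) arrays for the two conditions."""
    if weights is None:
        weights = np.ones(x.shape[1])
    w = weights / weights.sum()
    mx, my = (x * w).sum(axis=1), (y * w).sum(axis=1)               # weighted means
    sx = np.sqrt(((x - mx[:, None]) ** 2 * w).sum(axis=1))
    sy = np.sqrt(((y - my[:, None]) ** 2 * w).sum(axis=1))
    pooled = np.sqrt(sx ** 2 / x.shape[1] + sy ** 2 / y.shape[1])
    return (mx - my) / (pooled + s0)      # s0 guards against tiny variances

rng = np.random.default_rng(0)
x = rng.normal(0, 1, size=(1000, 5))
y = rng.normal(0, 1, size=(1000, 5))
y[:50] += 2.0                              # 50 truly differential genes
scores = sam_like_scores(x, y)
top = np.argsort(np.abs(scores))[::-1][:50]
print(f"recovered {np.isin(top, np.arange(50)).sum()} of 50 true genes")
```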

  2. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    Science.gov (United States)

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture caused by the projective nature of conventional coronary angiography. Owing to the large amount of data and the complex vascular shapes, manual annotation has become increasingly unrealistic, so a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method can not only segment overlapping vascular structures robustly, but also achieve good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113% and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Automatic Delineation of On-Line Head-And-Neck Computed Tomography Images: Toward On-Line Adaptive Radiotherapy

    International Nuclear Information System (INIS)

    Zhang Tiezhi; Chi Yuwei; Meldolesi, Elisa; Yan Di

    2007-01-01

    Purpose: To develop and validate a fully automatic region-of-interest (ROI) delineation method for on-line adaptive radiotherapy. Methods and Materials: On-line adaptive radiotherapy requires a robust and automatic image segmentation method to delineate ROIs in on-line volumetric images. We have implemented an atlas-based image segmentation method to automatically delineate ROIs of head-and-neck helical computed tomography images. A total of 32 daily computed tomography images from 7 head-and-neck patients were delineated using this automatic image segmentation method. Manually drawn contours on the daily images were used as references in the evaluation of automatically delineated ROIs. Two methods were used in quantitative validation: (1) the dice similarity coefficient index, which indicates the overlapping ratio between the manually and automatically delineated ROIs; and (2) the distance transformation, which yields the distances between the manually and automatically delineated ROI surfaces. Results: Automatic segmentation showed agreement with manual contouring. For most ROIs, the dice similarity coefficient indexes were approximately 0.8. Similarly, the distance transformation evaluation results showed that the distances between the manually and automatically delineated ROI surfaces were mostly within 3 mm. The distances between two surfaces had a mean of 1 mm and standard deviation of <2 mm in most ROIs. Conclusion: With atlas-based image segmentation, it is feasible to automatically delineate ROIs on the head-and-neck helical computed tomography images in on-line adaptive treatments
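    The two validation measures mentioned above can be written compactly. The sketch below uses synthetic binary masks and a simple distance-transform-based surface distance; it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b, spacing_mm=1.0):
    """Mean distance from the boundary voxels of a to the nearest voxel of b."""
    a, b = a.astype(bool), b.astype(bool)
    boundary_a = a ^ binary_erosion(a)               # boundary voxels of a
    dist_to_b = distance_transform_edt(~b) * spacing_mm
    return dist_to_b[boundary_a].mean()

manual = np.zeros((64, 64, 64), dtype=bool); manual[20:40, 20:40, 20:40] = True
auto = np.zeros_like(manual); auto[22:42, 20:40, 20:40] = True   # shifted by 2 voxels
print(f"DSC = {dice(manual, auto):.2f}")
print(f"mean surface distance = {mean_surface_distance(manual, auto):.2f} mm")
```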

  4. Variational Bayesian labeled multi-Bernoulli filter with unknown sensor noise statistics

    Directory of Open Access Journals (Sweden)

    Qiu Hao

    2016-10-01

    Full Text Available It is difficult to build an accurate model for the measurement noise covariance in complex backgrounds. For scenarios with unknown sensor noise variances, an adaptive multi-target tracking algorithm based on the labeled random finite set and variational Bayesian (VB) approximation is proposed. The variational approximation technique is introduced into the labeled multi-Bernoulli (LMB) filter to jointly estimate the states of targets and the sensor noise variances. Simulation results show that the proposed method gives unbiased estimation of cardinality and has better performance than the VB probability hypothesis density (VB-PHD) filter and the VB cardinality balanced multi-target multi-Bernoulli (VB-CBMeMBer) filter in harsh situations. The simulations also confirm the robustness of the proposed method against time-varying noise variances. The computational complexity of the proposed method is higher than that of the VB-PHD and VB-CBMeMBer filters in extreme cases, while the mean execution times of the three methods are close when targets are well separated.

  5. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for embedding and extraction of the watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also allows a significant decrease in the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research provided in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that, at the lowest embedding intensity, the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.
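    The correlation-based detection idea can be illustrated on a single frame: a password seeds a pseudonoise pattern, and detection correlates the received frame with the same pattern. This is only a sketch of the principle; the paper's embedding domain, pattern generation and detection thresholds are not reproduced.

```python
import zlib
import numpy as np

def pn_pattern(shape, password, strength=1.0):
    """Password-seeded +/-1 pseudonoise pattern."""
    rng = np.random.default_rng(zlib.crc32(password.encode()))
    return strength * rng.choice([-1.0, 1.0], size=shape)

frame = np.random.rand(256, 256) * 255.0
marked = frame + pn_pattern(frame.shape, "secret-key", strength=2.0)   # embed
attacked = marked + 5.0 * np.random.randn(*frame.shape)                # noise attack

def detect(img, password):
    pattern = pn_pattern(img.shape, password)
    return float(np.mean((img - img.mean()) * pattern))   # correlation score

print("correct key:", detect(attacked, "secret-key"))     # clearly positive
print("wrong key  :", detect(attacked, "other-key"))      # near zero (collision case)
```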

  6. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    Science.gov (United States)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which can learn expert knowledge from the training images and the experts' annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the heart structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo-anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, with results comparable to those of an expert.

  7. Automatic Image Alignment and Stitching of Medical Images with Seam Blending

    OpenAIRE

    Abhinav Kumar; Raja Sekhar Bandaru; B Madhusudan Rao; Saket Kulkarni; Nilesh Ghatpande

    2010-01-01

    This paper proposes an algorithm which automatically aligns and stitches component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here, the technique is intensity-based rather than feature-based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together usin...

  8. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Full Text Available Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. The speaker's voice is a robust biometric modality that has a strong impact in several application areas. In this study, a new combination learning scheme is proposed, based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning Vector Quantization (LVQ), for automatic text-independent speaker identification. Feature vectors, constituted by Mel Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used for training on the New England subset of the TIMIT database. The best results obtained were 90% for gender-independent speaker identification, and 97% for male speakers and 93% for female speakers, on test data using 36 MFCC features.
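    A bare-bones GMM back-end over MFCC features conveys the flavour of such a pipeline. The file names below are placeholders, and the actual GMM-UBM MAP adaptation and LVQ combination described above are not reproduced here; librosa and scikit-learn are assumed to be available.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    """Frames x coefficients MFCC matrix for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Hypothetical enrollment data: a few utterances per speaker.
speakers = {"spk1": ["spk1_a.wav", "spk1_b.wav"],
            "spk2": ["spk2_a.wav", "spk2_b.wav"]}

models = {}
for spk, files in speakers.items():
    feats = np.vstack([mfcc_features(f) for f in files])
    models[spk] = GaussianMixture(n_components=16, covariance_type="diag").fit(feats)

def identify(path):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    feats = mfcc_features(path)
    return max(models, key=lambda spk: models[spk].score(feats))

print(identify("unknown_utterance.wav"))
```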

  9. Spoken language achieves robustness and evolvability by exploiting degeneracy and neutrality.

    Science.gov (United States)

    Winter, Bodo

    2014-10-01

    As with biological systems, spoken languages are strikingly robust against perturbations. This paper shows that languages achieve robustness in a way that is highly similar to many biological systems. For example, speech sounds are encoded via multiple acoustically diverse, temporally distributed and functionally redundant cues, characteristics that bear similarities to what biologists call "degeneracy". Speech is furthermore adequately characterized by neutrality, with many different tongue configurations leading to similar acoustic outputs, and different acoustic variants understood as the same by recipients. This highlights the presence of a large neutral network of acoustic neighbors for every speech sound. Such neutrality ensures that a steady backdrop of variation can be maintained without impeding communication, assuring that there is "fodder" for subsequent evolution. Thus, studying linguistic robustness is not only important for understanding how linguistic systems maintain their functioning upon the background of noise, but also for understanding the preconditions for language evolution. © 2014 WILEY Periodicals, Inc.

  10. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. Conclusion The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients and then further evaluating the method on this new training set.
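    The segmentation-free step, estimating the cardiac cycle length from the autocorrelation of the envelope, can be sketched on a synthetic signal; the click train below merely stands in for a real phonocardiogram, and the lag search range is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 10, 1 / fs)
cycle = 0.8                                    # simulated 75 bpm heart
pcg = np.zeros_like(t)
for k in np.arange(0, 10, cycle):              # crude S1 "clicks" once per cycle
    pcg += np.exp(-((t - k) ** 2) / (2 * 0.015 ** 2))
pcg += 0.05 * np.random.randn(t.size)

env = np.abs(hilbert(pcg))                     # envelope detection
env = env - env.mean()
acf = np.correlate(env, env, mode="full")[env.size - 1:]
lo, hi = int(0.4 * fs), int(2.0 * fs)          # plausible cycle lengths: 0.4-2.0 s
lag = lo + int(np.argmax(acf[lo:hi]))
print(f"estimated cycle length: {lag / fs:.2f} s")   # ~0.80 s expected
```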

  11. Dynamics of double-well Bose–Einstein condensates subject to external Gaussian white noise

    International Nuclear Information System (INIS)

    Zheng Hanlei; Hao Yajiang; Gu Qiang

    2013-01-01

    Dynamical properties of the Bose–Einstein condensate in a double-well potential subject to Gaussian white noise are investigated by numerically solving the time-dependent Gross–Pitaevskii equation. The Gaussian white noise is used to describe influence of the random environmental disturbance on the double-well condensate. Dynamical evolutions from three different initial states, the Josephson oscillation state, the running phase and π-mode macroscopic quantum self-trapping states, are considered. It is shown that the system is rather robust with respect to the weak noise whose strength is small and change rate is high. If the evolution time is sufficiently long, the weak noise will finally drive the system to evolve from high-energy states to low-energy states, but in a manner rather different from the energy-dissipation effect. In the presence of strong noise with either large strength or slow change rate, the double-well condensate may exhibit very irregular dynamical behaviours. (paper)

  12. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide list of Phonocardiogram (PCG) features in time and frequency domain along with morphological and statistical features to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large and open access database, made available in Physionet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smart phone based digital stethoscope from an Indian hospital was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior art approaches, when applied on the same dataset.

  13. Towards automatic patient positioning and scan planning using continuously moving table MR imaging.

    Science.gov (United States)

    Koken, Peter; Dries, Sebastian P M; Keupp, Jochen; Bystrov, Daniel; Pekar, Vladimir; Börnert, Peter

    2009-10-01

    A concept is proposed to simplify patient positioning and scan planning to improve ease of use and workflow in MR. After patient preparation in front of the scanner the operator selects the anatomy of interest by a single push-button action. Subsequently, the patient table is moved automatically into the scanner, while real-time 3D isotropic low-resolution continuously moving table scout scanning is performed using patient-independent MR system settings. With a real-time organ identification process running in parallel and steering the scanner, the target anatomy can be positioned fully automatically in the scanner's sensitive volume. The desired diagnostic examination of the anatomy of interest can be planned and continued immediately using the geometric information derived from the acquired 3D data. The concept was implemented and successfully tested in vivo in 12 healthy volunteers, focusing on the liver as the target anatomy. The positioning accuracy achieved was on the order of several millimeters, which turned out to be sufficient for initial planning purposes. Furthermore, the impact of nonoptimal system settings on the positioning performance, the signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) was investigated. The present work proved the basic concept of the proposed approach as an element of future scan automation. (c) 2009 Wiley-Liss, Inc.

  14. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    Directory of Open Access Journals (Sweden)

    Jaw-Chyng Lue

    2008-01-01

    Full Text Available A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared, dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with an unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function.

  15. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
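    For contrast with the letter's approach, the classical MST clustering it improves upon (cut the longest edges of the minimum spanning tree) fits in a few lines; the density-based coarsening and minimax-similarity grouping themselves are not shown here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, n_clusters):
    """Classical MST clustering: remove the k-1 longest MST edges."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist).toarray()
    edges = np.argwhere(mst > 0)                    # edge list (row-major order)
    order = np.argsort(mst[mst > 0])[::-1]          # longest edges first
    for i, j in edges[order[: n_clusters - 1]]:     # cut k-1 longest edges
        mst[i, j] = 0.0
    _, labels = connected_components(mst, directed=False)
    return labels

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
print(np.bincount(mst_clusters(pts, 2)))            # roughly [50, 50]
```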

  16. Acoustic topological insulator and robust one-way sound transport

    Science.gov (United States)

    He, Cheng; Ni, Xu; Ge, Hao; Sun, Xiao-Chen; Chen, Yan-Bin; Lu, Ming-Hui; Liu, Xiao-Ping; Chen, Yan-Feng

    2016-12-01

    Topological design of materials enables topological symmetries and facilitates unique backscattering-immune wave transport. In airborne acoustics, however, the intrinsic longitudinal nature of sound polarization makes the use of the conventional spin-orbital interaction mechanism impossible for achieving band inversion. The topological gauge flux is then typically introduced with a moving background in theoretical models. Its practical implementation is a serious challenge, though, due to inherent dynamic instabilities and noise. Here we realize the inversion of acoustic energy bands at a double Dirac cone and provide an experimental demonstration of an acoustic topological insulator. By manipulating the hopping interaction of neighbouring 'atoms' in this new topological material, we successfully demonstrate the acoustic quantum spin Hall effect, characterized by robust pseudospin-dependent one-way edge sound transport. Our results are promising for the exploration of new routes for experimentally studying topological phenomena and related applications, for example, sound-noise reduction.

  17. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines in petrochemical industries without disturbing their properties and performance. Experimental results are promising and support the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)
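    A sketch of such a texture pipeline on synthetic patches: grey-level co-occurrence features followed by unsupervised clustering, with k-means standing in for the SOM. It assumes a recent scikit-image (graycomatrix/graycoprops naming) and scikit-learn; the patches are invented, not corrosion images.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(patch):
    """Co-occurrence texture attributes for one grey-level patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(1)
smooth = rng.normal(128, 5, (8, 32, 32)).astype(np.uint8)      # "sound" metal
rough = rng.integers(0, 256, (8, 32, 32)).astype(np.uint8)     # "corroded" texture
feats = np.array([glcm_features(p) for p in np.concatenate([smooth, rough])])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(labels)   # the two texture classes should separate cleanly
```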

  18. WHO Environmental Noise Guidelines for the European Region: A Systematic Review on Environmental Noise and Adverse Birth Outcomes.

    Science.gov (United States)

    Nieuwenhuijsen, Mark J; Ristovska, Gordana; Dadvand, Payam

    2017-10-19

    not conduct meta-analyses. Discussion: This systematic review is supported by previous systematic reviews and meta-analyses that suggested that there may be some suggestive evidence for an association between environmental noise exposure and birth outcomes, although they pointed more generally to a stronger role of occupational noise exposure, which tends to be higher and last longer. Very strict criteria for inclusion and exclusion of studies, performance of quality assessment for risk of bias, and finally applying GRADE principles for judgment of quality of evidence are the strengths of this review. We found evidence of very low quality for associations between aircraft noise and preterm birth, low birth weight and congenital anomalies, and low quality evidence for an association between road traffic noise and low birth weight, preterm birth and small for gestational age. Further high quality studies are required to establish such associations. Future studies are recommended to apply robust exposure assessment methods (e.g., modeled or measured noise levels at bedroom façade), disentangle associations for different sources of noise as well as daytime and nighttime noise, evaluate the impacts of noise events (that stand out from the noise background), and control the analyses for confounding factors, such as socioeconomic status, lifestyle factors and other environmental factors, especially air pollution.

  19. WHO Environmental Noise Guidelines for the European Region: A Systematic Review on Environmental Noise and Adverse Birth Outcomes

    Directory of Open Access Journals (Sweden)

    Mark J. Nieuwenhuijsen

    2017-10-01

    studies, we did not conduct meta-analyses. Discussion: This systematic review is supported by previous systematic reviews and meta-analyses that suggested that there may be some suggestive evidence for an association between environmental noise exposure and birth outcomes, although they pointed more generally to a stronger role of occupational noise exposure, which tends to be higher and last longer. Very strict criteria for inclusion and exclusion of studies, performance of quality assessment for risk of bias, and finally applying GRADE principles for judgment of quality of evidence are the strengths of this review. Conclusions: We found evidence of very low quality for associations between aircraft noise and preterm birth, low birth weight and congenital anomalies, and low quality evidence for an association between road traffic noise and low birth weight, preterm birth and small for gestational age. Further high quality studies are required to establish such associations. Future studies are recommended to apply robust exposure assessment methods (e.g., modeled or measured noise levels at the bedroom façade), disentangle associations for different sources of noise as well as daytime and nighttime noise, evaluate the impacts of noise events (that stand out from the noise background), and control the analyses for confounding factors, such as socioeconomic status, lifestyle factors and other environmental factors, especially air pollution.

  20. Automatic analog IC sizing and optimization constrained with PVT corners and layout effects

    CERN Document Server

    Lourenço, Nuno; Horta, Nuno

    2017-01-01

    This book introduces readers to a variety of tools for automatic analog integrated circuit (IC) sizing and optimization. The authors provide a historical perspective on the early methods proposed to tackle automatic analog circuit sizing, with emphasis on the methodologies to size and optimize the circuit, and on the methodologies to estimate the circuit's performance. The discussion also includes robust circuit design and optimization and the most recent advances in layout-aware analog sizing approaches. The authors describe a methodology for an automatic flow for analog IC design, including details of the inputs and interfaces, multi-objective optimization techniques, and the enhancements made in the base implementation by using machine learning techniques. The Gradient model is discussed in detail, along with the methods to include layout effects in the circuit sizing. The concepts and algorithms of all the modules are thoroughly described, enabling readers to reproduce the methodologies, improve the qual...

  1. A Robust Image Watermarking in the Joint Time-Frequency Domain

    Directory of Open Access Journals (Sweden)

    Yalçın Çekiç

    2010-01-01

    Full Text Available With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on a spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET), calculated by the Gabor expansion, to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.

  2. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm against data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors of projection matrix parameters of up to 1° projection angle deviations are still within the tolerance level. Single defect pixels exhibit ring artifacts for each method. However, using defect pixel compensation allows up to 40% of defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. (paper)
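    The role of the TV penalty can be illustrated with a toy 2D gradient-descent denoising loop, a stand-in for the full ML cone-beam reconstruction; the step size and regularization weight below are arbitrary choices.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of the (smoothed) total variation of image u."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / norm, uy / norm
    return -(px - np.roll(px, 1, axis=1)) - (py - np.roll(py, 1, axis=0))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

u, lam, step = noisy.copy(), 0.15, 0.2
for _ in range(200):
    u -= step * ((u - noisy) + lam * tv_gradient(u))   # data term + TV term
print(f"RMSE noisy   : {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}")
print(f"RMSE denoised: {np.sqrt(np.mean((u - clean) ** 2)):.3f}")
```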

  3. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  4. Advances in Modal Analysis Using a Robust and Multiscale Method

    Directory of Open Access Journals (Sweden)

    Frisson Christian

    2010-01-01

    Full Text Available Abstract This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  5. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  6. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    Science.gov (United States)

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Traditional automatic navigation methods for bio-robots are constrained to pre-configured environments and thus cannot be applied to tasks in unknown environments. By disregarding the bio-robot's own innate living abilities and treating bio-robots in the same way as mechanical robots, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By utilizing the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that this method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work lays a solid base for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  7. A simple parameter can switch between different weak-noise-induced phenomena in a simple neuron model

    Science.gov (United States)

    Yamakou, Marius E.; Jost, Jürgen

    2017-10-01

    In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
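    A minimal Euler-Maruyama sketch of the noise-driven FitzHugh-Nagumo model; the parameter values and the crude spike counter are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_fhn(eps=0.05, a=1.05, sigma=0.03, dt=1e-3, T=100.0, seed=0):
    """Count threshold crossings of the fast variable v under synaptic noise."""
    rng = np.random.default_rng(seed)
    v, w = -1.0, -0.5
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        dv = (v - v ** 3 / 3.0 - w) / eps       # fast (voltage-like) variable
        dw = v + a                              # slow (recovery) variable
        v += dt * dv + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if v > 1.0 and not above:               # crude spike detection
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

print("spikes, weak noise  :", simulate_fhn(sigma=0.03))
print("spikes, strong noise:", simulate_fhn(sigma=0.3))
# Moving the fixed point (parameter a) relative to the fold and changing the
# time-scale separation eps is what switches between the SISR and ISR regimes
# discussed above.
```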

  8. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    Science.gov (United States)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  9. Statistical classification of road pavements using near field vehicle rolling noise measurements.

    Science.gov (United States)

    Paulo, Joel Preto; Coelho, J L Bento; Figueiredo, Mário A T

    2010-10-01

    Low noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement which implements the Close-Proximity method. A set of features, characterizing the properties of the road pavement, was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.

  10. Dosimetric Evaluation of Automatic Segmentation for Adaptive IMRT for Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Tsuji, Stuart Y.; Hwang, Andrew; Weinberg, Vivian; Yom, Sue S.; Quivey, Jeanne M.; Xia Ping

    2010-01-01

    Purpose: Adaptive planning to accommodate anatomic changes during treatment requires repeat segmentation. This study uses dosimetric endpoints to assess automatically deformed contours. Methods and Materials: Sixteen patients with head-and-neck cancer had adaptive plans because of anatomic change during radiotherapy. Contours from the initial planning computed tomography (CT) were deformed to the mid-treatment CT using an intensity-based free-form registration algorithm, then compared with the manually drawn contours for the same CT using the Dice similarity coefficient and an overlap index. The automatic contours were used to create new adaptive plans. The original and automatic adaptive plans were compared based on dosimetric outcomes of the manual contours and on plan conformality. Results: Volumes from the manual and automatic segmentation were similar; only the gross tumor volume (GTV) was significantly different. Automatic plans achieved lower mean coverage for the GTV, V95: 98.6 ± 1.9% vs. 89.9 ± 10.1% (p = 0.004), and for the clinical target volume, V95: 98.4 ± 0.8% vs. 89.8 ± 6.2%, as well as a lower dose to 1 cm3 of the spinal cord, 39.9 ± 3.7 Gy vs. 42.8 ± 5.4 Gy (p = 0.034), but no difference for the remaining structures. Conclusions: Automatic segmentation is not robust enough to substitute for physician-drawn volumes, particularly for the GTV. However, it generates normal structure contours of sufficient accuracy when assessed by dosimetric end points.

  11. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties in the existing firearm identification algorithms, include requiring the physical interpretation and time consuming. Therefore, the aim of this study is to propose a robust algorithm for a firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) using the simulated noisy center-firing pin impression images. The proposed algorithm comprises Laplacian sharpening filter, clustering-based threshold selection, unweighted least square estimator, and segment a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identical task on the noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
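    The pre-processing chain named above (sharpening, threshold selection, square ROI extraction) can be mimicked on a synthetic image; Otsu's method stands in for the clustering-based threshold selection, and the ROI size is an assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = np.zeros((200, 200)); img[70:130, 70:130] = 1.0          # synthetic impression
img = ndimage.gaussian_filter(img, 3) + 0.2 * rng.standard_normal(img.shape)

sharpened = img - ndimage.laplace(img)           # Laplacian sharpening
mask = sharpened > threshold_otsu(sharpened)     # threshold selection (Otsu stand-in)
ys, xs = np.nonzero(mask)
cy, cx = int(ys.mean()), int(xs.mean())          # centroid of the segmented region
half = 40                                        # square ROI half-size (assumed)
roi = sharpened[cy - half:cy + half, cx - half:cx + half]
print(roi.shape)                                 # square ROI passed on to matching
```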

  12. Robust Grid-Current-Feedback Resonance Suppression Method for LCL-Type Grid-Connected Inverter Connected to Weak Grid

    DEFF Research Database (Denmark)

    Zhou, Xiaoping; Zhou, Leming; Chen, Yandong

    2018-01-01

    In this paper, a robust grid-current-feedback resonance suppression (GCFRS) method for LCL-type grid-connected inverters is proposed to enhance the system damping without introducing switching noise and to eliminate the impact of control delay on system robustness against grid-impedance variation. It is composed of the GCFRS method, the full duty-ratio and zero-beat-lag PWM method, and the lead-grid-current-feedback resonance suppression (LGCFRS) method. Firstly, the GCFRS is used to suppress the LCL-resonant peak well and avoid introducing switching noise. Secondly, the proposed full duty-ratio and zero-beat-lag PWM method is used to eliminate the one-beat-lag computation delay without introducing duty cycle limitations. Moreover, it can also realize the smooth switching from the positive to the negative half-wave of the grid current and improve the waveform quality. Thirdly, the proposed LGCFRS is used to further

  13. Effects of background noise on total noise annoyance

    Science.gov (United States)

    Willshire, K. F.

    1987-01-01

    Two experiments were conducted to assess the effects of combined community noise sources on annoyance. The first experiment established baseline relationships between annoyance and noise level for three community noise sources (jet aircraft flyovers, traffic, and air conditioners) presented individually. Forty-eight subjects evaluated the annoyance of each noise source presented at four different noise levels. Results indicated that the slope of the linear relationship between annoyance and noise level for the traffic noise was significantly different from that of aircraft and of air conditioner noise, which had equal slopes. The second experiment investigated annoyance response to combined noise sources, with aircraft noise defined as the major noise source and traffic and air conditioner noise as background noise sources. Effects on annoyance of noise level differences between aircraft and background noise for three total noise levels and for both background noise sources were determined. A total of 216 subjects were required to make either total or source-specific annoyance judgements, or a combination of the two, for a wide range of combined noise conditions.

  14. Automatic control design procedures for restructurable aircraft control

    Science.gov (United States)

    Looze, D. P.; Krolewski, S.; Weiss, J.; Barrett, N.; Eterno, J.

    1985-01-01

    A simple, reliable automatic redesign procedure for restructurable control is discussed. This procedure is based on Linear Quadratic (LQ) design methodologies. It employs a robust control system design for the unfailed aircraft to minimize the effects of failed surfaces and to extend the time available for restructuring the Flight Control System. The procedure uses the LQ design parameters for the unfailed system as a basis for choosing the design parameters of the failed system. This philosophy allows the engineering trade-offs that were present in the nominal design to be inherited by the restructurable design. In particular, it allows bandwidth limitations and performance trade-offs to be incorporated in the redesigned system. The procedure also has several other desirable features. It effectively redistributes authority among the available control effectors to maximize the system performance subject to actuator limitations and constraints. It provides a graceful performance degradation as the amount of control authority lessens. When given the parameters of the unfailed aircraft, the automatic redesign procedure reproduces the nominal control system design.

  15. Subcutaneous Tissue Thickness is an Independent Predictor of Image Noise in Cardiac CT

    Energy Technology Data Exchange (ETDEWEB)

    Staniak, Henrique Lane; Sharovsky, Rodolfo [Hospital Universitário - Universidade de São Paulo, São Paulo, SP (Brazil); Pereira, Alexandre Costa [Hospital das Clínicas - Universidade de São Paulo, São Paulo, SP (Brazil); Castro, Cláudio Campi de; Benseñor, Isabela M.; Lotufo, Paulo A. [Hospital Universitário - Universidade de São Paulo, São Paulo, SP (Brazil); Faculdade de Medicina - Universidade de São Paulo, São Paulo, SP (Brazil); Bittencourt, Márcio Sommer, E-mail: msbittencourt@mail.harvard.edu [Hospital Universitário - Universidade de São Paulo, São Paulo, SP (Brazil)

    2014-01-15

    Few data exist on the definition of simple, robust parameters to predict image noise in cardiac computed tomography (CT). To evaluate the value of a simple measure of subcutaneous tissue as a predictor of image noise in cardiac CT, 86 patients underwent prospective ECG-gated coronary computed tomographic angiography (CTA) and coronary calcium scoring (CAC) with 120 kV and 150 mA. The image quality was objectively measured by the image noise in the aorta in the cardiac CTA, and low noise was defined as noise < 30 HU. The chest anteroposterior diameter and lateral width, the image noise in the aorta and the skin-sternum (SS) thickness were measured as predictors of cardiac CTA noise. The association of the predictors with image noise was assessed using Pearson correlation. The mean radiation dose was 3.5 ± 1.5 mSv. The mean image noise in CT was 36.3 ± 8.5 HU, and the mean image noise in the non-contrast scan was 17.7 ± 4.4 HU. All predictors were independently associated with cardiac CTA noise. The best predictors were SS thickness, with a correlation of 0.70 (p < 0.001), and noise in the non-contrast images, with a correlation of 0.73 (p < 0.001). When evaluating the ability to predict low image noise, the areas under the ROC curve for the non-contrast noise and for the SS thickness were 0.837 and 0.864, respectively. Both SS thickness and CAC noise are simple, accurate predictors of cardiac CTA image noise. These parameters can be incorporated into standard CT protocols to adequately adjust radiation exposure.
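
    A sketch of the two statistics used in the analysis, Pearson correlation and the area under the ROC curve for predicting low-noise scans, on synthetic stand-in data (the numbers below are assumptions, not the study's measurements):

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Synthetic skin-sternum thickness (mm) and CTA image noise (HU) for 86 "patients".
        ss_thickness = rng.normal(25.0, 6.0, 86)
        cta_noise = 15.0 + 0.8 * ss_thickness + rng.normal(0.0, 4.0, 86)

        r, p = pearsonr(ss_thickness, cta_noise)
        print(f"Pearson r = {r:.2f} (p = {p:.3g})")

        # How well SS thickness discriminates low-noise scans (noise < 30 HU):
        low_noise = (cta_noise < 30.0).astype(int)
        auc = roc_auc_score(low_noise, -ss_thickness)  # thinner tissue predicts lower noise
        print(f"ROC AUC = {auc:.2f}")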

  16. Subcutaneous Tissue Thickness is an Independent Predictor of Image Noise in Cardiac CT

    International Nuclear Information System (INIS)

    Staniak, Henrique Lane; Sharovsky, Rodolfo; Pereira, Alexandre Costa; Castro, Cláudio Campi de; Benseñor, Isabela M.; Lotufo, Paulo A.; Bittencourt, Márcio Sommer

    2014-01-01

    Few data exist on the definition of simple, robust parameters to predict image noise in cardiac computed tomography (CT). To evaluate the value of a simple measure of subcutaneous tissue as a predictor of image noise in cardiac CT, 86 patients underwent prospective ECG-gated coronary computed tomographic angiography (CTA) and coronary calcium scoring (CAC) with 120 kV and 150 mA. The image quality was objectively measured by the image noise in the aorta in the cardiac CTA, and low noise was defined as noise < 30 HU. The chest anteroposterior diameter and lateral width, the image noise in the aorta and the skin-sternum (SS) thickness were measured as predictors of cardiac CTA noise. The association of the predictors with image noise was assessed using Pearson correlation. The mean radiation dose was 3.5 ± 1.5 mSv. The mean image noise in CT was 36.3 ± 8.5 HU, and the mean image noise in the non-contrast scan was 17.7 ± 4.4 HU. All predictors were independently associated with cardiac CTA noise. The best predictors were SS thickness, with a correlation of 0.70 (p < 0.001), and noise in the non-contrast images, with a correlation of 0.73 (p < 0.001). When evaluating the ability to predict low image noise, the areas under the ROC curve for the non-contrast noise and for the SS thickness were 0.837 and 0.864, respectively. Both SS thickness and CAC noise are simple, accurate predictors of cardiac CTA image noise. These parameters can be incorporated into standard CT protocols to adequately adjust radiation exposure.

  17. Maximizing noise energy for noise-masking studies.

    Science.gov (United States)

    Jules Étienne, Cédric; Arleo, Angelo; Allard, Rémy

    2017-08-01

    Noise-masking experiments are widely used to investigate visual functions. To be useful, noise generally needs to be strong enough to noticeably impair performance, but under some conditions, noise does not impair performance even when its contrast approaches the maximal displayable limit of 100 %. To extend the usefulness of noise-masking paradigms over a wider range of conditions, the present study developed a noise with great masking strength. There are two typical ways of increasing masking strength without exceeding the limited contrast range: use binary noise instead of Gaussian noise or filter out frequencies that are not relevant to the task (i.e., which can be removed without affecting performance). The present study combined these two approaches to further increase masking strength. We show that binarizing the noise after the filtering process substantially increases the energy at frequencies within the pass-band of the filter given equated total contrast ranges. A validation experiment showed that similar performances were obtained using binarized-filtered noise and filtered noise (given equated noise energy at the frequencies within the pass-band) suggesting that the binarization operation, which substantially reduced the contrast range, had no significant impact on performance. We conclude that binarized-filtered noise (and more generally, truncated-filtered noise) can substantially increase the energy of the noise at frequencies within the pass-band. Thus, given a limited contrast range, binarized-filtered noise can display higher energy levels than Gaussian noise and thereby widen the range of conditions over which noise-masking paradigms can be useful.
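
    The construction described above, filtering Gaussian noise to the pass-band of interest and then binarizing it, can be sketched in one dimension as below; the pass-band and contrast limit are illustrative choices, not the study's parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 4096
        gaussian_noise = rng.standard_normal(n)

        # Keep only the task-relevant frequency band (illustrative pass-band).
        freqs = np.fft.rfftfreq(n, d=1.0)
        passband = (freqs > 0.05) & (freqs < 0.15)
        filtered = np.fft.irfft(np.fft.rfft(gaussian_noise) * passband, n)

        # Binarize after filtering: keep only the sign, scaled to the contrast limit.
        max_contrast = 1.0
        binarized = max_contrast * np.sign(filtered)
        filtered_scaled = filtered / np.abs(filtered).max() * max_contrast

        def passband_energy(x):
            """Noise energy at frequencies inside the pass-band."""
            return float(np.sum(np.abs(np.fft.rfft(x))[passband] ** 2))

        # For the same maximum displayable contrast, the binarized noise carries
        # substantially more energy inside the pass-band.
        print("filtered  :", passband_energy(filtered_scaled))
        print("binarized :", passband_energy(binarized))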

  18. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  19. Correlated Noise: How it Breaks NMF, and What to Do About It.

    Science.gov (United States)

    Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D

    2011-01-12

    Non-negative matrix factorization (NMF) is the problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset, so that a regular NMF algorithm will fail to decompose it, even when given the freedom to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF), derive multiplicative updates for the method, and prove their convergence. The new algorithm successfully recovers the true representation from the noisy data. Its robust performance can make glsNMF a valuable tool for analyzing empirical data.
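
    For context, here is a minimal sketch of the standard multiplicative-update NMF that glsNMF generalizes: it minimizes the unweighted Euclidean objective ||V - WH||², whereas glsNMF replaces this with a generalized-least-squares fit that accounts for correlated noise (those weighted updates are not reproduced here).

        import numpy as np

        def nmf_multiplicative(V, k, n_iter=200, eps=1e-9, seed=0):
            """Lee-Seung multiplicative updates for min ||V - W H||_F^2 with W, H >= 0.
            Baseline NMF only; not the glsNMF algorithm from the paper."""
            rng = np.random.default_rng(seed)
            m, n = V.shape
            W = rng.random((m, k)) + eps
            H = rng.random((k, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H

        # Small synthetic non-negative data matrix (illustrative only).
        rng = np.random.default_rng(42)
        V = rng.random((20, 5)) @ rng.random((5, 30))
        W, H = nmf_multiplicative(V, k=5)
        print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))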

  20. Intelligent Noise Removal from EMG Signal Using Focused Time-Lagged Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    S. N. Kale

    2009-01-01

    Full Text Available Electromyography (EMG) signals can be used for clinical/biomedical applications and modern human-computer interaction. EMG signals acquire noise while traveling through tissue, as well as inherent noise from electronic equipment, ambient noise, and so forth. An ANN approach is studied for the reduction of noise in the EMG signal. In this paper, it is shown that a Focused Time-Lagged Recurrent Neural Network (FTLRNN) can elegantly remove the noise from the EMG signal. After rigorous computer simulations, the authors developed an optimal FTLRNN model which removes the noise from the EMG signal. Results show that the proposed optimal FTLRNN model has an MSE (Mean Square Error) as low as 0.000067 and 0.000048, and a correlation coefficient as high as 0.99950 and 0.99939, for the noise signal and the EMG signal, respectively, when validated on the test dataset. It is also noticed that the output of the estimated FTLRNN model closely follows the real one. This network is indeed robust, as the EMG signal tolerates noise variance from 0.1 to 0.4 for uniform noise and 0.30 for Gaussian noise. It is clear that the training of the network is independent of the specific partitioning of the dataset. It is seen that the performance of the proposed FTLRNN model clearly outperforms the best Multilayer Perceptron (MLP) and Radial Basis Function NN (RBF) models. A simple NN model such as the FTLRNN with a single hidden layer can be employed to remove noise from the EMG signal.
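
    The core idea of feeding time-lagged samples to a learned denoiser can be illustrated with a much simpler stand-in: a linear tapped-delay filter fit by least squares to map a window of noisy samples to the clean sample. The FTLRNN in the paper additionally has recurrent hidden units, which are not reproduced here, and the signal below is synthetic rather than real EMG.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic smooth signal plus uniform noise (illustrative, not real EMG).
        t = np.arange(5000)
        clean = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.03 * t)
        noisy = clean + rng.uniform(-0.3, 0.3, t.size)

        # Time-lagged input windows, standing in for the network's tapped delays.
        lags = 8
        n = len(noisy)
        X = np.stack([noisy[i : n - lags + 1 + i] for i in range(lags)], axis=1)
        y = clean[lags - 1 :]  # clean sample aligned with the newest lag

        # Least-squares linear map from the lagged noisy samples to the clean sample.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        denoised = X @ w

        print("MSE before:", float(np.mean((noisy[lags - 1 :] - y) ** 2)))
        print("MSE after :", float(np.mean((denoised - y) ** 2)))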