WorldWideScience

Sample records for source wavelet estimation

  1. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch-type estimator based on wavelet filter banks, which separate the data into high- and low-resolution components. The proposed complementary estimator combines the two velocity resolutions obtained from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, using a fixed moving-horizon window as the input to the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. The method estimates velocity without the high-frequency noise introduced by differentiators or the drifting bias introduced by integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF). It also allows velocity sensing with fewer mechanically moving parts, making it suitable for fast miniature structures. We compare the method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by integrating the estimated velocity over a long period to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
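
    The complementary fusion idea — a low-frequency velocity band from differentiated position, a high-frequency band from integrated acceleration, combined in the wavelet domain — can be sketched as follows. This is a minimal one-level Haar illustration in NumPy, not the authors' filter-bank implementation; all function names are made up for the sketch.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar analysis: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def complementary_velocity(position, acceleration, dt):
    """Fuse two velocity estimates in the wavelet domain: the
    approximation (low-frequency) band comes from differentiated
    position, which is accurate at low frequencies but noisy at high
    ones; the detail (high-frequency) band comes from integrated
    acceleration, which is clean at high frequencies but drifts."""
    v_from_pos = np.gradient(position, dt)      # numerical differentiation
    v_from_acc = np.cumsum(acceleration) * dt   # numerical integration
    a_pos, _ = haar_dwt(v_from_pos)
    _, d_acc = haar_dwt(v_from_acc)
    return haar_idwt(a_pos, d_acc)
```

    For a constant-velocity motion the differentiated position already carries the full answer in the approximation band, so the fused estimate reproduces it exactly.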

  2. Source location in plates based on the multiple sensors array method and wavelet analysis

    International Nuclear Information System (INIS)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon

    2014-01-01

    A new method for impact source localization in a plate is proposed, based on multiple signal classification (MUSIC) and wavelet analysis. Source localization requires estimating both the direction of arrival of the wave caused by an impact on the plate and the distance between the impact position and the sensor. The direction of arrival can be estimated accurately with the MUSIC method. The distance can be obtained from the time delay of arrival and the group velocity of the Lamb wave in the plate. The time delay is estimated experimentally using the continuous wavelet transform of the wave, and elastodynamic theory is used to estimate the group velocity.
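
    The distance step above (distance = group velocity × time delay of arrival) can be sketched as follows. The paper measures the delay with a continuous wavelet transform; as a simpler stand-in, this sketch picks the delay from the cross-correlation peak. Names and numbers are illustrative only.

```python
import numpy as np

def arrival_delay(reference, delayed, dt):
    """Delay of `delayed` relative to `reference`, from the peak of the
    full cross-correlation (a simple stand-in for the CWT-based delay
    estimation used in the paper)."""
    c = np.correlate(delayed, reference, mode="full")
    lag = int(np.argmax(c)) - (len(reference) - 1)
    return lag * dt

def impact_distance(time_delay, group_velocity):
    """Distance travelled by the Lamb wave packet: v_g * delay."""
    return group_velocity * time_delay
```

    With a pulse delayed by 20 samples at dt = 1 ms and an assumed group velocity of 5000 m/s, the recovered distance is 100 m worth of propagation path.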

  4. A feasibility study on wavelet transform for reactivity coefficient estimation

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2000-01-01

    Recently, a method using the Fourier transform has been introduced in place of the conventional method to reduce the time required to measure the moderator temperature coefficient in domestic PWRs. The basic concept of these methods is to eliminate noise in the reactivity signal, and from this point of view wavelet analysis is also known to be effective. In this paper, we apply wavelet analysis to estimate reactivity coefficients of a nuclear reactor. The basic idea is to analyze the ratios of the corresponding expansion coefficients of the wavelet transforms of the reactivity signal and of the relevant parameter; the concept requires no inverse wavelet transform. Numerical simulations show that the method can reasonably estimate a reactivity coefficient, for example the moderator temperature coefficient, with a shorter time sequence than the Fourier transform method requires. We will continue this study to examine the validity of the estimation procedure for actual reactor data, and further to estimate other reactivity coefficients. (author)
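
    The coefficient-ratio idea — divide wavelet expansion coefficients of the reactivity signal by those of the relevant parameter, no inverse transform needed — might look like this minimal sketch. One-level Haar details and a median ratio are simplifying assumptions; the paper does not specify this exact implementation, and all names are made up.

```python
import numpy as np

def haar_detail(x):
    """Finest-scale Haar detail coefficients of a signal."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def reactivity_coefficient(reactivity, temperature, eps=1e-6):
    """Estimate alpha in reactivity ~= alpha * temperature + offset by
    taking the median ratio of corresponding wavelet detail coefficients.
    Constant offsets live entirely in the approximation band, so they
    drop out; no inverse wavelet transform is required."""
    d_rho = haar_detail(reactivity)
    d_tmp = haar_detail(temperature)
    keep = np.abs(d_tmp) > eps      # ignore uninformative coefficients
    return float(np.median(d_rho[keep] / d_tmp[keep]))
```

    Because the ratio is taken coefficient by coefficient, any component of the signals that is common to both (including the DC offset) cancels out of the estimate.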

  5. Blank Field Sources in the ROSAT HRI Brera Multiscale Wavelet catalog

    OpenAIRE

    Chieregato, M.; Campana, S.; Treves, A.; Moretti, A.; Mignani, R. P.; Tagliaferri, G.

    2005-01-01

    The search for Blank Field Sources (BFS), i.e. X-ray sources without optical counterparts, paves the way to the identification of unusual objects in the X-ray sky. Here we present four BFS detected in the Brera Multiscale Wavelet catalog of ROSAT HRI observations. This sample has been selected on the basis of source brightness, distance from possible counterparts at other wavelengths, point-like shape and good estimate of the X-ray flux (f_X). The observed f_X and the limiting magnitude of th...

  6. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    The estimation of the standard deviation of the noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The most widely used method is based on the median absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage to extract parameters that are then used in the application stage; the training and test sets (13 and 5 images, respectively) are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. The results show the superiority of the training-based methods for the images and the range of noise levels considered.
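
    The MAD baseline the paper compares against is easy to state concretely. Below is a minimal sketch using a one-level Haar detail band rather than the Daubechies filters typically used; the constant 0.6745 is the median absolute deviation of a standard normal.

```python
import numpy as np

def mad_noise_sigma(signal):
    """Donoho-Johnstone style noise estimate: the median absolute value
    of the finest-scale wavelet detail coefficients, divided by 0.6745
    (the MAD of a standard normal distribution)."""
    x = np.asarray(signal, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # finest Haar detail band
    return float(np.median(np.abs(d)) / 0.6745)
```

    For a smooth signal plus white Gaussian noise, the detail band is dominated by the noise, so the estimate tracks the true standard deviation; this is the model assumption the abstract refers to.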

  7. Wavelet-Based Methodology for Evolutionary Spectra Estimation of Nonstationary Typhoon Processes

    Directory of Open Access Journals (Sweden)

    Guang-Dong Zhou

    2015-01-01

    Closed-form expressions are proposed to estimate the evolutionary power spectral density (EPSD) of nonstationary typhoon processes by employing the wavelet transform. Relying on the definition of the EPSD and the concept of the wavelet transform, the wavelet coefficients of a nonstationary typhoon process at a certain time instant are interpreted as the Fourier transform of a new nonstationary oscillatory process, whose modulating function equals the modulating function of the typhoon process multiplied by the wavelet function in the time domain. The EPSD of nonstationary typhoon processes is then deduced in closed form and formulated as a weighted sum of the squared moduli of time-dependent wavelet functions. The weighting coefficients are frequency-dependent functions defined by the wavelet coefficients of the nonstationary typhoon process and the overlapping area of two shifted wavelets. Compared with the EPSD defined in the literature as a sum of the squared moduli of the wavelets in the frequency domain, this paper provides an EPSD estimation method in the time domain. The theoretical results are verified on uniformly modulated and non-uniformly modulated nonstationary typhoon processes.

  8. Selection of the wavelet function for the frequencies estimation

    International Nuclear Information System (INIS)

    Garcia R, A.

    2007-01-01

    Signals are now used to diagnose the state of systems by extracting their most important characteristics, such as frequencies, trends, changes, and temporal evolutions. These characteristics are detected by diverse analysis techniques, such as autoregressive methods, the Fourier transform, the short-time Fourier transform, and the wavelet transform, among others. The present work uses the wavelet transform because it allows stationary, quasi-stationary, and transient signals to be analyzed in the time-frequency plane. It also describes a methodology for selecting the scales and the wavelet function to which the wavelet transform is applied, with the objective of detecting the dominant system frequencies. (Author)

  9. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency, and phase); the estimation of the wavelet can then be transformed into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by higher-order statistics (HOS), namely fourth-order cumulant matching. To derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, the HOS can be represented as a polynomial function of second-order statistics, improving noise robustness and accuracy. In addition, the proposed method works well for short time series.
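
    The constant-phase rotation step can be illustrated concretely. The sketch below rotates a real signal by a constant phase via its analytic signal and then estimates the phase by maximizing sample kurtosis over a grid — a common stand-in criterion, not the paper's MSMG-based cumulant matching. All names here are illustrative, and the phase is only identifiable modulo pi.

```python
import numpy as np

def phase_rotate(signal, phi):
    """Rotate a real signal by a constant phase phi via its analytic
    signal: s_phi = Re{ (s + i*H[s]) * exp(i*phi) }."""
    s = np.asarray(signal, dtype=float)
    n = len(s)
    spec = np.fft.fft(s)
    h = np.zeros(n)                 # analytic-signal frequency weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(spec * h)
    return np.real(analytic * np.exp(1j * phi))

def kurtosis(x):
    x = x - np.mean(x)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def estimate_phase(signal, grid=None):
    """Constant phase (mod pi) maximizing kurtosis after rotation; for a
    zero-phase, peaked wavelet the maximum sits near zero."""
    if grid is None:
        grid = np.linspace(-np.pi / 2, np.pi / 2, 181)
    scores = [kurtosis(phase_rotate(signal, p)) for p in grid]
    return float(grid[int(np.argmax(scores))])
```

    Rotating a cosine by pi/2 yields minus the sine of the same phase track, which is a quick sanity check on the analytic-signal construction.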

  10. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain components, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated into the wavelet transform. Among the wavelet denoising methods proposed to date, the wavelets of Mallat and Zhong reconstruct a pure transient signal best from a highly corrupted one, but there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method for noise discrimination is proposed that can be implemented easily in a digital system. To demonstrate the potential role of wavelet denoising in the nuclear field, the method is applied to estimating the steam or feedwater flow rate of the secondary loop, and a configuration of the S/G water level control system is proposed that incorporates the wavelet denoising method for estimating the flow rate at low operating powers.
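
    As a concrete illustration of wavelet denoising (not the Mallat-Zhong dyadic transform used in the paper), here is a one-level Haar soft-thresholding sketch with the universal threshold; the noise level is taken from the MAD of the detail band. Names are illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar analysis."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Perfect-reconstruction inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(signal):
    """Soft-threshold the detail band at t = sigma * sqrt(2 ln n),
    with sigma estimated from the MAD of the details, then reconstruct."""
    a, d = haar_dwt(signal)
    sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    return haar_idwt(a, d)
```

    On a smooth signal in white noise, shrinking the detail band removes roughly the half of the noise energy living there while barely touching the signal, so the mean squared error drops.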

  11. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    Science.gov (United States)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model: in the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. The new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and the new model can be used by financial institutions as well.

  12. Finite Sample Comparison of Parametric, Semiparametric, and Wavelet Estimators of Fractional Integration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ø.; Frederiksen, Per Houmann

    2005-01-01

    In this paper we compare, through Monte Carlo simulations, the finite sample properties of estimators of the fractional differencing parameter, d. This involves frequency domain, time domain, and wavelet based approaches, and we consider both parametric and semiparametric estimation methods. The estimators are briefly introduced and compared, and the criteria adopted for measuring finite sample performance are bias and root mean squared error. Most importantly, the simulations reveal that (1) the frequency domain maximum likelihood procedure is superior to the time domain parametric methods, (2) all …, and (4) without sufficient trimming of scales the wavelet-based estimators are heavily biased.
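
    As one concrete example of the wavelet-based approaches being compared, a log-scale regression estimator (in the spirit of Abry-Veitch) regresses the log2 variance of detail coefficients on the level j; for a stationary long-memory process the slope is approximately 2d. This is a minimal Haar sketch with illustrative names, not any of the specific estimators in the paper, and it omits the trimming of scales the paper warns about.

```python
import numpy as np

def haar_details_by_level(x, levels):
    """Detail coefficients at levels 1..levels of a Haar transform."""
    x = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        out.append(d)
        x = a
    return out

def wavelet_d_estimate(x, levels=6):
    """Slope of log2 Var(d_j) against level j, divided by 2. For white
    noise (d = 0) the wavelet variance is flat across levels, so the
    estimate is near zero; long memory tilts the line upward."""
    details = haar_details_by_level(x, levels)
    j = np.arange(1, levels + 1, dtype=float)
    logvar = np.log2([np.var(d) for d in details])
    slope = np.polyfit(j, logvar, 1)[0]
    return float(slope / 2.0)
```
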

  13. Estimation of long memory in volatility using wavelets

    Czech Academy of Sciences Publication Activity Database

    Kraicová, Lucie; Baruník, Jozef

    2017-01-01

    Roč. 21, č. 3 (2017), č. článku 20160101. ISSN 1081-1826 R&D Projects: GA ČR GA13-32263S EU Projects: European Commission 612955 - FINMAP Institutional support: RVO:67985556 Keywords : long memory * wavelets * whittle Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 0.649, year: 2016 http://library.utia.cas.cz/separaty/2017/E/barunik-0478480.pdf

  14. Kernel and wavelet density estimators on manifolds and more general metric spaces

    DEFF Research Database (Denmark)

    Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.

    We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich to allow the development of smooth functional calculus with well localized spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real …

  15. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

    This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet domain, by fitting the estimated wavelet power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed series. As discussed in this paper, calibrating hydrological models to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time domain. Wavelet-domain parameter estimation therefore has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and to a real-world discharge modeling case study, and discuss how model diagnosis can benefit from an analysis in the wavelet domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications of how to improve the model structure.
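
    A wavelet-domain performance measure of the kind described might be sketched as follows: summarize each series by its wavelet power per scale and sum the squared differences between the simulated and observed summaries. This is a Haar-based toy objective with made-up names, not the paper's wavelet-power-spectrum fit.

```python
import numpy as np

def haar_power_by_level(x, levels=5):
    """Mean squared Haar detail coefficient at each level: a crude
    summary of how the signal's energy is distributed across scales."""
    x = np.asarray(x, dtype=float)
    power = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        power.append(np.mean(d ** 2))
        x = a
    return np.array(power)

def wavelet_objective(simulated, observed, levels=5):
    """Calibration objective: squared distance between the multiscale
    power summaries of simulated and observed series (lower is better)."""
    p_sim = haar_power_by_level(simulated, levels)
    p_obs = haar_power_by_level(observed, levels)
    return float(np.sum((p_sim - p_obs) ** 2))
```

    A calibration loop would minimize this objective over the model parameters; unlike a time-domain residual, it rewards matching the scale-by-scale energy distribution rather than the pointwise hydrograph.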

  16. Value at risk estimation with entropy-based wavelet analysis in exchange markets

    Science.gov (United States)

    He, Kaijian; Wang, Lijun; Zou, Yingchao; Lai, Kin Keung

    2014-08-01

    In recent years, exchange markets have become increasingly integrated, and fluctuations and risks across different exchange markets exhibit co-moving and complex dynamics. In this paper we propose entropy-based multivariate wavelet approaches to analyze the multiscale characteristics in the multidimensional domain and to further improve the reliability of Value at Risk estimation. Wavelet analysis is used to construct an entropy-based multiscale portfolio Value at Risk estimation algorithm that accounts for the multiscale dynamic correlation. The entropy measure, combined with an error minimization principle, is proposed as the more effective criterion for selecting the best basis when determining the wavelet family and the decomposition level to use. The empirical studies conducted in this paper, on the closely related Chinese Renminbi and European Euro exchange markets, provide positive evidence of the superior performance of the proposed approach.

  17. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    Science.gov (United States)

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technique to select effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signals corresponding to gripping force were collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule for combining wavelet scales based on the sensitivity of each scale, and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of the SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared with two earlier methods on prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the earlier methods.

  18. Robust Wavelet Estimation to Eliminate Simultaneously the Effects of Boundary Problems, Outliers, and Correlated Noise

    Directory of Open Access Journals (Sweden)

    Alsaidi M. Altaher

    2012-01-01

    Classical wavelet thresholding methods suffer from boundary problems caused by the application of wavelet transformations to a finite signal. As a result, large bias at the edges and artificial wiggles occur when the classical boundary assumptions are not satisfied. Although polynomial wavelet regression and local polynomial wavelet regression effectively reduce the risk of this problem, the estimates from these two methods can easily be affected by the presence of correlated noise and outliers, giving inaccurate estimates. This paper introduces two robust methods in which the effects of boundary problems, outliers, and correlated noise are taken into account simultaneously. The proposed methods combine a thresholding estimator with either a local polynomial model or a polynomial model, using the generalized least squares method instead of the ordinary one. A preliminary step that removes outlying observations through a statistical function is considered as well. The practical performance of the proposed methods has been evaluated through simulation experiments and real data examples. The results are strong evidence that the proposed methods are highly effective in correcting the boundary bias and eliminating the effects of outliers and correlated noise.

  19. Chernobyl source term estimation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1990-09-01

    The Chernobyl source term available for long-range transport was estimated by integrating radiological measurements with atmospheric dispersion modeling, and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the model-predicted concentrations and the observed cloud arrival times and durations of peak concentrations measured over Europe, Japan, Kuwait, and the US, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium, and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the Cs-137 from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the I-131 and Sr-90 released by the Chernobyl accident were only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs

  20. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content, as well as audio, seismic, or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; and targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at …

  1. Wavelet Based Denoising for the Estimation of the State of Charge for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    2018-05-01

    In practical electric vehicle applications, noise in the original discharging/charging voltage (DCV) signals is inevitable, arising from electromagnetic interference and the measurement noise of the sensors. To address this, a Discrete Wavelet Transform (DWT) based state of charge (SOC) estimation method is proposed in this paper. Through multi-resolution analysis, the original noisy DCV signals are decomposed into different frequency sub-bands. The desired de-noised DCV signals are then reconstructed using the inverse discrete wavelet transform with the SURE thresholding rule. With the de-noised DCV signal, the SOC and the model parameters are obtained using an adaptive extended Kalman filter algorithm and an adaptive forgetting factor recursive least squares method. Simulation and experimental results show that the SOC estimation error is less than 1%, indicating an effective improvement in SOC estimation accuracy.

  2. Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2016-08-01

    This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing, and risk management. The method is a two-step procedure of detection followed by estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we compare the wavelet coefficients against a threshold and declare jump points where the absolute wavelet coefficients exceed it. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and taking the difference between the two averages: for each jump location detected in Step 1, we compute one average from the observed noisy price processes before the detected jump location and one after it, and take their difference. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to O_P(n^(-4/9)), which is better than the convergence rate O_P(n^(-1/4)) for the procedure based on the original noisy process, where n is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
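
    The two-step detect-then-average procedure can be sketched under simplifying assumptions: one-level Haar detection, a MAD-based threshold, and a jump that falls between the two samples of a coefficient pair. All names and constants are illustrative, not the paper's estimator.

```python
import numpy as np

def detect_and_size_jumps(prices, window=16, k=5.0):
    """Step 1: flag jump locations where a finest-scale Haar coefficient
    exceeds k times the MAD noise level. Step 2: size each jump as the
    average of `window` noisy prices after the point minus the average
    of `window` prices before it. Note: at this single level, only jumps
    falling between the two samples of a coefficient pair are seen."""
    x = np.asarray(prices, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    sigma = np.median(np.abs(d)) / 0.6745
    jumps = []
    for p in np.nonzero(np.abs(d) > k * sigma)[0]:
        i = 2 * p + 1                   # first post-jump sample index
        lo, hi = max(0, i - window), min(len(x), i + window)
        jumps.append((i, x[i:hi].mean() - x[lo:i].mean()))
    return jumps
```

    Averaging over a window on each side is what suppresses the microstructure noise in the jump-size estimate, mirroring the averaging idea in the abstract.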

  3. Application of the unwrapped phase inversion to land data without source estimation

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali; DeVault, Bryan

    2015-01-01

    … and the source wavelet are updated simultaneously and interact with each other. We suggest a source-independent unwrapped phase inversion approach instead of relying on source estimation for this land data. In the source-independent approach, the phase …

  4. Estimation of moderator temperature coefficient of actual PWRs using wavelet transform

    International Nuclear Information System (INIS)

    Katsumata, Ryosuke; Shimazu, Yoichiro

    2001-01-01

    Recently, the applicability of the wavelet transform to the estimation of the moderator temperature coefficient was shown in numerical simulations. The basic concept of the wavelet transform approach is to eliminate noise in the measured signals; the concept is similar to that of the Fourier transform method, in which the analyzed reactivity component is divided by the analyzed component of the relevant parameter. In order to apply the method to measured data from actual PWRs, we carried out numerical simulations on data closer to actual measurements and proposed a method for estimating the moderator temperature coefficient using the wavelet transform. In the numerical simulations we obtained moderator temperature coefficients with relative errors of less than 4%. Based on this result we applied the method to measured data from actual PWRs, and the results proved that it is applicable to the estimation of moderator temperature coefficients in actual PWRs. The method is expected to reduce the data length required during the measurement, and we expect to extend its applicability to estimating other reactivity coefficients from short-transient data. (author)

  5. A continuous wavelet transform approach for harmonic parameters estimation in the presence of impulsive noise

    Science.gov (United States)

    Dai, Yu; Xue, Yuan; Zhang, Jianxun

    2016-01-01

    Impulsive noise caused by random events is characterized by short rise times and a wide frequency spectrum, so it can degrade the performance and reliability of harmonic estimation. This paper focuses on a harmonic estimation procedure based on the continuous wavelet transform (CWT) when the analyzed signal is corrupted by impulsive noise. The digital CWTs of both the time-varying sinusoidal signal and the impulsive noise are analyzed; there are two cross ridges in the CWT time-frequency plane, generated by the signal and the noise separately. Considering the amplitude of the noise and the number of spike events, two inequalities are derived that place limits on the wavelet parameters. Based on the amplitude distribution of the noise, optimal wavelet parameters determined by solving these inequalities are used to suppress the contamination of the noise and to increase the amplitude of the ridge corresponding to the signal, so that the parameters of each harmonic component can be estimated accurately. The proposed procedure is applied to a numerical simulation and a bone vibration signal test, giving satisfactory results for stationary and time-varying harmonic parameter estimation.

  6. Selection of the wavelet function for the frequencies estimation; Seleccion de la funcion wavelet para la estimacion de frecuencias

    Energy Technology Data Exchange (ETDEWEB)

    Garcia R, A. [ININ, Carretera Mexico-Toluca S/N, 52750 La Marquesa, Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: ramador@nuclear.inin.mx

    2007-07-01

    At present, signals are used to diagnose the state of systems by extracting their most important characteristics, such as frequencies, trends, changes and temporal evolutions. These characteristics are detected by means of diverse analysis techniques, such as autoregressive methods, the Fourier transform, the short-time Fourier transform and the wavelet transform, among others. The present work uses the wavelet transform because it allows the analysis of stationary, quasi-stationary and transitory signals in the time-frequency plane. It also describes a methodology for selecting the scales and the wavelet function with which the wavelet transform is applied, with the objective of detecting the dominant system frequencies. (Author)

  7. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    Full Text Available Introduction One reason for the complexity of predicting hydrological phenomena, especially time series, is the existence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing. Appropriate preprocessing makes estimation of these phenomena easier. Preprocessing in data-driven models such as artificial neural networks, gene expression programming and support vector machines is particularly effective because the quality of the data in these models is important. The present study, by considering denoising and data transformation as two different preprocessing steps, tries to improve the results of intelligent models. In this study two different intelligent models, Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods In this study, Gene Expression Programming and Artificial Neural Network are used as intelligent models for suspended sediment load estimation; then the impacts of the denoising and logarithmic transformation approaches as data preprocessors are evaluated and compared with respect to result improvement. Two different logarithmic transforms are considered in this research, LN and LOG. Wavelet transformation is used for time series denoising. In order to denoise by wavelet transform, the time series is first decomposed at one level (an approximation part and a detail part), and second, the high-frequency part (detail) is removed as noise. Given the ability of gene expression programming and artificial neural networks to analyze nonlinear systems, daily values of suspended sediment load of the Skunk River in the USA, during a 5-year period, are investigated and then estimated. 4 years of
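The one-level decompose-and-discard scheme described above can be sketched in a few lines; here a Haar wavelet stands in for whatever mother wavelet the study used (an assumption for illustration only):

```python
import numpy as np

def haar_denoise(x):
    """One-level Haar wavelet denoising: keep the approximation part,
    zero the detail (high-frequency) part, then reconstruct."""
    n = len(x) - len(x) % 2                 # truncate to even length
    pairs = x[:n].reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    # detail = (pairs[:, 0] - pairs[:, 1]) / sqrt(2) is treated as noise
    # and set to zero, so reconstruction reduces to pairwise averaging.
    rec = np.empty(n)
    rec[0::2] = approx / np.sqrt(2)
    rec[1::2] = approx / np.sqrt(2)
    return rec

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 3 * t)           # slowly varying "signal"
noisy = clean + 0.3 * rng.standard_normal(512)

denoised = haar_denoise(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Because the detail band carries roughly half the white-noise variance, discarding it roughly halves the mean squared error for a signal this smooth.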

  8. A Novel Intelligent Method for the State of Charge Estimation of Lithium-Ion Batteries Using a Discrete Wavelet Transform-Based Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Deyu Cui

    2018-04-01

    Full Text Available State of charge (SOC) estimation is becoming increasingly important along with the rapid development of electric vehicles (EVs), since SOC is one of the most significant parameters for the battery management system, indicating the remaining energy and ensuring the safety and reliability of the EV. In this paper, a hybrid wavelet neural network (WNN) model combining the discrete wavelet transform (DWT) method and an adaptive WNN is proposed to estimate the SOC of lithium-ion batteries. The WNN model is trained by the Levenberg-Marquardt (L-M) algorithm, and its inputs are processed by discrete wavelet decomposition and reconstruction. Compared with a back-propagation neural network (BPNN), an L-M based BPNN (LMBPNN), an L-M based WNN (LMWNN), DWT with L-M based BPNN (DWTLMBPNN) and an extended Kalman filter (EKF), the proposed intelligent SOC estimation method is validated and proved to be effective. Under the New European Driving Cycle (NEDC), the mean absolute error and maximum error can be reduced to 0.59% and 3.13%, respectively. The high accuracy and strong robustness of the proposed method are verified by a comparison study and robustness evaluation results (e.g., a measurement noise test and an untrained driving cycle test).

  9. Effects of the near field on source-independent Q estimation

    NARCIS (Netherlands)

    Shigapov, R.; Kashtan, B.; Droujinine, A.; Mulder, W.A.

    2013-01-01

    We consider the problem of Q estimation from microseismic and from perforation-shot data. Assuming that the source wavelet is not well known, we focus on the spectral ratio method and on source-independent viscoelastic full-waveform inversion. We derived 3-D near-field approximations of monopole

  10. BSDWormer: an Open Source Implementation of a Poisson Wavelet Multiscale Analysis for Potential Fields

    Science.gov (United States)

    Horowitz, F. G.; Gaede, O.

    2014-12-01

    Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at . We will give an overview of the technique, show code snippets using the codebase, and present visualization results for example datasets (including the Surat basin of Australia

  11. Portfolio Value at Risk Estimate for Crude Oil Markets: A Multivariate Wavelet Denoising Approach

    Directory of Open Access Journals (Sweden)

    Kin Keung Lai

    2012-04-01

    Full Text Available In the increasingly globalized economy these days, the major crude oil markets worldwide are seeing a higher level of integration, which results in a higher level of dependency and transmission of risks among different markets. Thus the risk of a typical multi-asset crude oil portfolio is influenced by the dynamic correlation among the different assets, which has both normal and transient behaviors. This paper proposes a novel multivariate wavelet denoising based approach for estimating Portfolio Value at Risk (PVaR). The multivariate wavelet analysis is introduced to analyze the multi-scale behaviors of the correlation among different markets and the portfolio volatility behavior in the higher-dimensional time-scale domain. The heterogeneous data and noise behavior are addressed in the proposed multi-scale denoising based PVaR estimation algorithm, which also incorporates mainstream time series models to address other well-known data features such as autocorrelation and volatility clustering. Empirical studies suggest that the proposed algorithm outperforms the benchmark Exponentially Weighted Moving Average (EWMA) and DCC-GARCH models, in terms of conventional performance evaluation criteria for model reliability.
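For reference, the benchmark EWMA (RiskMetrics-style) VaR against which the wavelet approach is compared can be sketched as follows; the decay factor 0.94, the normality assumption and the synthetic return series are illustrative choices, not the paper's settings:

```python
import numpy as np

def ewma_var_forecast(returns, lam=0.94, z95=1.6449):
    """One-step-ahead 95% Value at Risk from an EWMA volatility
    forecast, assuming normally distributed returns."""
    v = np.var(returns[:20])                 # seed the variance recursion
    for r in returns:
        v = lam * v + (1 - lam) * r ** 2     # RiskMetrics-style update
    return z95 * np.sqrt(v)                  # loss level exceeded ~5% of days

rng = np.random.default_rng(42)
returns = 0.02 * rng.standard_normal(1000)   # synthetic daily returns, sigma = 2%
print(ewma_var_forecast(returns))            # roughly 1.645 * 0.02
```

The exponential weighting makes the forecast react quickly to volatility clusters, which is exactly the feature the multi-scale denoising approach tries to model more finely.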

  12. Time variation of the electromagnetic transfer function of the earth estimated by using wavelet transform.

    Science.gov (United States)

    Suto, Noriko; Harada, Makoto; Izutsu, Jun; Nagao, Toshiyasu

    2006-07-01

    In order to accurately estimate the geomagnetic transfer functions in the area of the volcano Mt. Iwate, we applied the interstation transfer function (ISTF) method to the three-component geomagnetic field data observed at the Mt. Iwate station (IWT), using the Kakioka Magnetic Observatory, JMA (KAK) as a remote reference station. Instead of the conventional Fourier transform, in which transient noise badly degrades the accuracy of long-term properties, the continuous wavelet transform has been used. The accuracy of the results was as high as that of robust estimates of transfer functions obtained by the Fourier transform method. This provides the possibility of routinely monitoring the transfer functions, without sophisticated statistical procedures, to detect changes in the underground electrical conductivity structure.

  13. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model

  14. Acoustic emission source location in plates using wavelet analysis and cross time frequency spectrum.

    Science.gov (United States)

    Mostafapour, A; Davoodi, S; Ghareaghaji, M

    2014-12-01

    In this study, the theories of the wavelet transform and the cross time-frequency spectrum (CTFS) are used to locate AE sources with frequency-varying wave velocity in plate-type structures. A rectangular array of four sensors is installed on the plate. When an impact is generated by an artificial AE source, such as the Hsu-Nielsen method of pencil lead breaking (PLB), at any position on the plate, the AE signals are detected by the four sensors at different times. By wavelet packet decomposition, a packet of signals in the frequency range 0.125-0.25 MHz is selected. The CTFS is calculated by the short-time Fourier transform of the cross-correlation between the considered packets captured by the AE sensors. The time delay is taken where the CTFS reaches its maximum value, and the corresponding frequency is extracted at this maximum. The resulting frequency is used to calculate the group velocity of the wave in combination with the dispersion curve. The resulting location error demonstrates the high precision of the proposed algorithm. Copyright © 2014 Elsevier B.V. All rights reserved.
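The core time-delay step (locating the maximum of the cross-correlation between packets captured by two sensors) can be sketched as follows; the plain time-domain correlation below stands in for the paper's CTFS, and the burst parameters are invented for illustration:

```python
import numpy as np

def time_delay(sig_a, sig_b, fs):
    """Delay of sig_b relative to sig_a, in seconds, from the peak of
    their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

fs = 1e6                                     # 1 MHz sampling (assumed)
t = np.arange(0, 2e-3, 1 / fs)
# Gaussian-windowed burst standing in for the 0.125-0.25 MHz packet.
burst = np.exp(-((t - 2e-4) / 5e-5) ** 2) * np.sin(2 * np.pi * 0.18e6 * t)
delayed = np.roll(burst, 37)                 # 37-sample later arrival

print(time_delay(burst, delayed, fs))        # 3.7e-05 s
```

Combined with the group velocity at the extracted frequency, such a delay converts directly into a source-to-sensor distance difference.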

  15. USING THE METHODS OF WAVELET ANALYSIS AND SINGULAR SPECTRUM ANALYSIS IN THE STUDY OF RADIO SOURCE BL LAC

    OpenAIRE

    Donskykh, G. I.; Ryabov, M. I.; Sukharev, A. I.; Aller, M.

    2014-01-01

    We investigated monitoring data of the extragalactic source BL Lac. This monitoring was carried out with the University of Michigan 26-meter radio telescope. To study the flux density of the extragalactic source BL Lac at frequencies of 14.5, 8 and 4.8 GHz, wavelet analysis and singular spectrum analysis were used. Calculating the integral wavelet spectra allowed revealing long-term components (~7-8 years) and short-term components (~1-4 years) in BL Lac. Studying of VLBI radio maps (by the program Mojave) ...

  16. Point source detection using the Spherical Mexican Hat Wavelet on simulated all-sky Planck maps

    Science.gov (United States)

    Vielva, P.; Martínez-González, E.; Gallegos, J. E.; Toffolatti, L.; Sanz, J. L.

    2003-09-01

    We present an estimation of the point source (PS) catalogue that could be extracted from the forthcoming ESA Planck mission data. We have applied the Spherical Mexican Hat Wavelet (SMHW) to simulated all-sky maps that include cosmic microwave background (CMB), Galactic emission (thermal dust, free-free and synchrotron), thermal Sunyaev-Zel'dovich effect and PS emission, as well as instrumental white noise. This work is an extension of the one presented in Vielva et al. We have developed an algorithm focused on a fast local optimal scale determination, which is crucial to achieve a PS catalogue with a large number of detections and a low flux limit. An important effort has also been made to reduce the CPU time needed for the spherical harmonic transformation, in order to perform the PS detection in a reasonable time. The presented algorithm is able to provide a PS catalogue above fluxes: 0.48 Jy (857 GHz), 0.49 Jy (545 GHz), 0.18 Jy (353 GHz), 0.12 Jy (217 GHz), 0.13 Jy (143 GHz), 0.16 Jy (100 GHz HFI), 0.19 Jy (100 GHz LFI), 0.24 Jy (70 GHz), 0.25 Jy (44 GHz) and 0.23 Jy (30 GHz). We detect around 27 700 PS at the highest frequency Planck channel and 2900 at the 30-GHz one. The completeness levels are: 70 per cent (857 GHz), 75 per cent (545 GHz), 70 per cent (353 GHz), 80 per cent (217 GHz), 90 per cent (143 GHz), 85 per cent (100 GHz HFI), 80 per cent (100 GHz LFI), 80 per cent (70 GHz), 85 per cent (44 GHz) and 80 per cent (30 GHz). In addition, we can find several PS at different channels, allowing the study of the spectral behaviour and the physical processes acting on them. We also present the basic procedure to apply the method in maps convolved with asymmetric beams.
The algorithm takes ~72 h for the most CPU time-demanding channel (857 GHz) in a Compaq HPC320 (Alpha EV68 1-GHz processor) and requires 4 GB of RAM memory; the CPU time scales as O[NRo Npix^(3/2) log(Npix)], where Npix is the number of pixels in the map and NRo is the number of optimal scales needed.

  17. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine-tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. http://bigwww.epfl.ch/algorithms/spotcaliper/ zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Peak center and area estimation in gamma-ray energy spectra using a Mexican-hat wavelet

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan [School of Information Science & Technology, Chengdu University of Technology, Chengdu (China); Wu, Qi-fan [Department of Engineering Physics, Tsinghua University, Beijing (China)

    2017-06-21

    Wavelet analysis is commonly used to detect and localize peaks within a signal, such as in gamma-ray energy spectra. This paper presents a peak area estimation method based on a new wavelet analysis. Another Mexican hat wavelet signal, named the new MHWS, is obtained from the convolution of a Gaussian signal and a MHWS. During the transform, the overlapping background on the Gaussian signal caused by Compton scattering can be subtracted, because the impulse response function, the MHWS, is a second-order smooth function, and the amplitude of the maximum within the new MHWS is the net height corresponding to the Gaussian signal height, which can be used to estimate the Gaussian peak area. Moreover, the zero-crossing points within the new MHWS contain information about the Gaussian variance, whose value is needed when the Gaussian peak area is estimated. Further, the center of the new MHWS is also the Gaussian peak center. With that distinguishing feature, the channel address of a characteristic peak center can be accurately obtained, which is very useful in the stabilization of airborne gamma energy spectra. In particular, a method for determining the correction coefficient k is given for the case where the peak area is calculated inaccurately because the value of the scale factor in the wavelet transform is too small. Simulation and practical applications show the feasibility of the proposed peak center and area estimation method.
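The background-suppression property the authors exploit (a Mexican hat wavelet has two vanishing moments, so convolution annihilates a linear Compton-like background while preserving the peak location) can be illustrated with a minimal NumPy sketch; the spectrum, peak and background values are invented for illustration:

```python
import numpy as np

def ricker(n, sigma):
    """Unnormalized Mexican hat (Ricker) wavelet on n points: the
    negative second derivative of a Gaussian of width sigma."""
    t = np.arange(n) - (n - 1) / 2
    a = t / sigma
    return (1.0 - a ** 2) * np.exp(-a ** 2 / 2)

# Gaussian photopeak sitting on a linear Compton-like background.
ch = np.arange(512)
peak = 100.0 * np.exp(-((ch - 256) / 8.0) ** 2 / 2)
background = 0.05 * ch + 30.0
spectrum = peak + background

w = ricker(101, 8.0)                         # scale matched to the peak width
transformed = np.convolve(spectrum, w, mode="same")
# The wavelet's vanishing moments annihilate the linear background,
# so the transform's maximum lands at the peak center channel.
print(int(np.argmax(transformed)))           # 256
```

The amplitude of that maximum scales with the net peak height, which is the quantity the paper converts into a peak area estimate.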

  19. Estimation of leaf water content from far infrared (2.5-14µm) spectra using continuous wavelet analysis

    NARCIS (Netherlands)

    Ullah, S.; Skidmore, A.K.; Naeem, M.; Schlerf, M.

    2012-01-01

    The objective of this study was to estimate leaf water content based on continuous wavelet analysis of the far infrared (2.5 - 14.0 μm) spectra. The entire dataset comprised 394 far infrared spectra, which were divided into calibration (262 spectra) and validation (132 spectra) subsets. The far

  20. Fetal QRS detection and heart rate estimation: a wavelet-based approach

    International Nuclear Information System (INIS)

    Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João

    2014-01-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)

  1. Application of the wavelet ridges method for the estimation of the decay ratio in Boiling Water Reactors; Atomos para el desarrollo de Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Prieto G, A.; Espinosa P, G. [UAM-I, 09340 Mexico D.F. (Mexico)

    2008-07-01

    A wavelet ridges application is proposed as a simple method to determine the evolution of the linear stability parameters of a BWR NPP using neutronic noise signals. The wavelet ridges are used to track the instantaneous frequencies contained in a signal and to estimate the decay ratio (DR). The first step of the method consists of denoising the analyzed signals by the discrete wavelet transform (DWT) to reduce the interference of high-frequency noise and concentrate the analysis in the band where the crucial frequencies are present. Next, the wavelet ridges are computed by the continuous wavelet transform (CWT) to obtain the modulus maxima of the normalized scalogram of the signal. These wavelet ridges can then be used to compute the instantaneous frequencies contained in the signal and the evolution of the DR along the measurement. To study the performance of the wavelet ridges method in computing the evolution of the linear stability parameters, both simulated and real neutronic signals were considered. The simulated signal is used to validate the method and to study some features of the wavelet ridges approach. To demonstrate the method's applicability, a real neutronic signal from the instability event at Laguna Verde was analyzed. The investigations show that most of the local energy of the signal is concentrated in the wavelet ridges, and DR variations of the signals were observed along the measurements. (Author)
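A common simplified definition of the DR, the ratio of consecutive maxima of the signal's autocorrelation, can be sketched as follows; this is a generic illustration, not the wavelet-ridges estimator of the paper, and the signal parameters are invented:

```python
import numpy as np

def decay_ratio(sig, fs):
    """DR estimated as the ratio of the second to the first local
    maximum of the signal's normalized autocorrelation."""
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    ac = ac / ac[0]
    # local maxima of the autocorrelation, excluding lag 0
    peaks = [i for i in range(1, len(ac) - 1) if ac[i - 1] < ac[i] > ac[i + 1]]
    return ac[peaks[1]] / ac[peaks[0]]

fs = 25.0                                    # Hz, sampling rate (assumed)
t = np.arange(0.0, 40.0, 1 / fs)
f0, dr_true = 0.5, 0.7                       # resonance frequency and target DR
damped = np.exp(np.log(dr_true) * f0 * t) * np.cos(2 * np.pi * f0 * t)

print(decay_ratio(damped, fs))               # close to 0.7
```

A DR approaching 1 indicates an undamped, marginally stable oscillation, which is why tracking its evolution along a measurement matters for BWR stability monitoring.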

  2. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    Science.gov (United States)

    Zhonggang, Liang; Hong, Yan

    2006-10-01

    A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation of the method is as follows: First, the fractal component is extracted from the HRV signals using the wavelet transform. Next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model, and the parameter gamma is estimated using the least squares method. Finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 - (gamma - 1)/2. To validate the stability and reliability of the proposed method, 24 fractal signals with fractal dimension 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
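The final step, converting a spectral exponent gamma into a fractal dimension via D = 2 - (gamma - 1)/2, can be illustrated with a random-phase surrogate that has a prescribed 1/f^gamma spectrum; a periodogram slope fit stands in here for the paper's AR-model spectral estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 14
gamma_true = 1.8                             # spectral exponent of 1/f^gamma

# Random-phase surrogate with a prescribed 1/f^gamma power spectrum.
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-gamma_true / 2)
signal = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs))), n)

# Estimate gamma from the log-log slope of the periodogram.
pxx = np.abs(np.fft.rfft(signal)) ** 2
f, p = freqs[1:-1], pxx[1:-1]                # drop the DC and Nyquist bins
slope, _ = np.polyfit(np.log(f), np.log(p), 1)
gamma_hat = -slope

D = 2 - (gamma_hat - 1) / 2                  # fractal dimension
print(gamma_hat, D)                          # close to 1.8 and 1.6
```

With gamma = 1.8 the formula yields D = 1.6, matching the fractal value the abstract uses for its validation signals.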

  3. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    Science.gov (United States)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of the continuous wavelet transform. We focus specifically on three types of sources, namely atmospherics, slow tails and whistlers, which cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio

  4. Wavelets, vibrations and scalings

    CERN Document Server

    Meyer, Yves

    1997-01-01

    Physicists and mathematicians are intensely studying fractal sets of fractal curves. Mandelbrot advocated modeling of real-life signals by fractal or multifractal functions. One example is fractional Brownian motion, where large-scale behavior is related to a corresponding infrared divergence. Self-similarities and scaling laws play a key role in this new area. There is a widely accepted belief that wavelet analysis should provide the best available tool to unveil such scaling laws. And orthonormal wavelet bases are the only existing bases which are structurally invariant through dyadic dilations. This book discusses the relevance of wavelet analysis to problems in which self-similarities are important. Among the conclusions drawn are the following: 1) A weak form of self-similarity can be given a simple characterization through size estimates on wavelet coefficients, and 2) Wavelet bases can be tuned in order to provide a sharper characterization of this self-similarity. A pioneer of the wavelet "saga", Meye...

  5. Wavelets in scientific computing

    DEFF Research Database (Denmark)

    Nielsen, Ole Møller

    1998-01-01

    the FWT can be used as a front-end for efficient image compression schemes. Part II deals with vector-parallel implementations of several variants of the Fast Wavelet Transform. We develop an efficient and scalable parallel algorithm for the FWT and derive a model for its performance. Part III...... supported wavelets in the context of multiresolution analysis. These wavelets are particularly attractive because they lead to a stable and very efficient algorithm, namely the fast wavelet transform (FWT). We give estimates for the approximation characteristics of wavelets and demonstrate how and why...... is an investigation of the potential for using the special properties of wavelets for solving partial differential equations numerically. Several approaches are identified and two of them are described in detail. The algorithms developed are applied to the nonlinear Schrödinger equation and Burgers' equation...

  6. Real-time estimation of optical flow based on optimized haar wavelet features

    DEFF Research Database (Denmark)

    Salmen, Jan; Caup, Lukas; Igel, Christian

    2011-01-01

    -objective optimization. In this work, we build on a popular algorithm developed for realtime applications. It is originally based on the Census transform and benefits from this encoding for table-based matching and tracking of interest points. We propose to use the more universal Haar wavelet features instead...

  7. Discovering Wavelets

    CERN Document Server

    Aboufadel, Edward

    1999-01-01

    An accessible and practical introduction to wavelets. With applications in image processing, audio restoration, seismology, and elsewhere, wavelets have been the subject of growing excitement and interest over the past several years. Unfortunately, most books on wavelets are accessible primarily to research mathematicians. Discovering Wavelets presents basic and advanced concepts of wavelets in a way that is accessible to anyone with only a fundamental knowledge of linear algebra. The basic concepts of wavelet theory are introduced in the context of an explanation of how the FBI uses wavelets

  8. Estimation of the Tool Condition by Applying the Wavelet Transform to Acoustic Emission Signals

    International Nuclear Information System (INIS)

    Gomez, M. P.; Piotrkowski, R.; Ruzzante, J. E.; D'Attellis, C. E.

    2007-01-01

    This work continues the search for parameters to evaluate the tool condition in machining processes. The selected sensing technique is acoustic emission, applied to the turning of steel samples. The obtained signals are studied using the wavelet transform. The tool wear level is quantified as a percentage of the final wear specified by the Standard ISO 3685. The amplitude and the relevant scale obtained from the acoustic emission signals could be related to the wear level.

  9. Signal-dependent independent component analysis by tunable mother wavelets

    International Nuclear Information System (INIS)

    Seo, Kyung Ho

    2006-02-01

    The objective of this study is to improve standard independent component analysis when applied to real-world signals. Independent component analysis starts from the assumption that signals from different physical sources are statistically independent. But real-world signals such as EEG, ECG, MEG, and fMRI signals are not perfectly statistically independent. By definition, standard independent component analysis algorithms are not able to estimate statistically dependent sources, that is, when the assumption of independence does not hold. Therefore, some preprocessing stage is needed before independent component analysis. This paper starts from the simple intuition that source signals wavelet-transformed by a 'well-tuned' mother wavelet will be simplified sufficiently, so that the source separation will show better results. The tuning process between the source signal and the tunable mother wavelet was carried out by the correlation coefficient method. The gamma component of a raw EEG signal was set as the target signal, and the wavelet transform was executed with the tuned mother wavelet and with standard mother wavelets. Simulation results obtained with these wavelets are shown
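The correlation-coefficient tuning step can be sketched as follows; the real-valued Gaussian-windowed cosine atoms, the candidate frequency grid and the synthetic 40 Hz "gamma" burst are all illustrative assumptions, not the study's setup:

```python
import numpy as np

def best_center_freq(target, fs, candidate_freqs, cycles=10.0):
    """Return the center frequency whose Gaussian-windowed cosine atom
    has the largest correlation coefficient with the target segment."""
    n = len(target)
    t = (np.arange(n) - n / 2) / fs
    best_f, best_r = None, -1.0
    for f in candidate_freqs:
        sigma = cycles / (2 * np.pi * f)     # envelope spans ~`cycles` cycles
        atom = np.cos(2 * np.pi * f * t) * np.exp(-(t / sigma) ** 2 / 2)
        r = abs(np.corrcoef(target, atom)[0, 1])
        if r > best_r:
            best_f, best_r = f, r
    return best_f, best_r

fs, n = 256.0, 512
t = (np.arange(n) - n / 2) / fs
# Synthetic burst around 40 Hz standing in for an EEG gamma component.
target = np.cos(2 * np.pi * 40 * t) * np.exp(-(t / 0.05) ** 2 / 2)

f_best, r_best = best_center_freq(target, fs, np.arange(20.0, 61.0, 2.0))
print(f_best, r_best)
```

The atom with the highest correlation coefficient is the "well-tuned" mother-wavelet candidate; here the search correctly locks onto the burst's 40 Hz center frequency.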

  10. Realized wavelet-based estimation of integrated variance and jumps in the presence of noise

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Vácha, Lukáš

    2015-01-01

    Roč. 15, č. 8 (2015), s. 1347-1364 ISSN 1469-7688 R&D Projects: GA ČR GA13-32263S EU Projects: European Commission 612955 - FINMAP Grant - others:GA ČR(CZ) GA13-24313S Institutional support: RVO:67985556 Keywords : quadratic variation * realized variance * jumps * market microstructure noise * wavelets Subject RIV: AH - Economics Impact factor: 0.794, year: 2015 http://library.utia.cas.cz/separaty/2014/E/barunik-0434203.pdf

  11. Wavelets in functional data analysis

    CERN Document Server

    Morettin, Pedro A; Vidakovic, Brani

    2017-01-01

    Wavelet-based procedures are key in many areas of statistics, applied mathematics, engineering, and science. This book presents wavelets in functional data analysis, offering a glimpse of problems in which they can be applied, including tumor analysis, functional magnetic resonance and meteorological data. Starting with the Haar wavelet, the authors explore myriad families of wavelets and how they can be used. High-dimensional data visualization (using Andrews' plots), wavelet shrinkage (a simple, yet powerful, procedure for nonparametric models) and a selection of estimation and testing techniques (including a discussion on Stein’s Paradox) make this a highly valuable resource for graduate students and experienced researchers alike.

  12. Wavelet analysis

    CERN Document Server

    Cheng, Lizhi; Luo, Yong; Chen, Bo

    2014-01-01

    This book can be divided into two parts, i.e., fundamental wavelet transform theory and methods, and some important applications of the wavelet transform. In the first part, as preliminary knowledge, Fourier analysis, inner product spaces, the characteristics of Haar functions, and the concepts of multi-resolution analysis are introduced, followed by a description of how to construct wavelet functions, both multi-band and multi-wavelets; finally, the design of integer wavelets via lifting schemes and their application to integer transform algorithms is introduced. In the second part, many applications in the field of image and signal processing are discussed, introducing other wavelet variants such as complex wavelets, ridgelets, and curvelets. Important application examples include image compression, image denoising/restoration, image enhancement, digital watermarking, numerical solution of partial differential equations, and solving ill-conditioned Toeplitz systems. The book is intended for senior undergraduate stude...

  13. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    International Nuclear Information System (INIS)

    Shukri Mohd

    2013-01-01

    Full-text: Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique, termed Wavelet Transform analysis and Modal Location (WTML), based on Lamb wave theory and time-frequency analysis, which can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results than the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure. (author)
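    For reference, the time-of-arrival principle behind the TOA comparison method reduces, for a source between two sensors on a linear structure, to a one-line formula. The sketch below uses assumed values (a nominal S0 Lamb-mode group velocity and sensor spacing chosen for illustration), not the paper's experimental configuration:

```python
import numpy as np

def toa_locate_1d(delta_t, sensor_spacing, wave_speed):
    """Linear time-of-arrival location between two sensors.

    delta_t: arrival time at sensor 1 minus arrival time at sensor 2 (s).
    Returns the distance of the source from sensor 1.
    From x1 + x2 = D and x1 - x2 = v * delta_t: x1 = (D + v * delta_t) / 2.
    """
    return 0.5 * (sensor_spacing + wave_speed * delta_t)

# Assumed values: ~5000 m/s group velocity, 1.5 m sensor spacing
v = 5000.0   # m/s (assumed Lamb-mode group velocity)
D = 1.5      # m between the two sensors
x_true = 0.4 # m from sensor 1, the "unknown" source position
dt = (x_true - (D - x_true)) / v   # simulated arrival-time difference
x_est = toa_locate_1d(dt, D, v)
```

    The mode- and frequency-dependence of the Lamb-wave group velocity is exactly what makes plain TOA inaccurate on plates, which motivates the wavelet-based modal approach of the paper.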

  14. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    International Nuclear Information System (INIS)

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-01-01

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique, termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, which can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results than the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure

  15. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Mohd, Shukri [Nondestructive Testing Group, Industrial Technology Division, Malaysian Nuclear Agency, 43000, Bangi, Selangor (Malaysia); Holford, Karen M.; Pullin, Rhys [Cardiff School of Engineering, Cardiff University, Queen's Buildings, The Parade, CARDIFF CF24 3AA (United Kingdom)

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique, termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, which can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results than the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with that of the deltaT location method but requires no initial acoustic calibration of the structure.

  16. A new hybrid support vector machine–wavelet transform approach for estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Shamshirband, Shahaboddin; Tong, Chong Wen; Arif, Muhammad; Petković, Dalibor; Ch, Sudheer

    2015-01-01

    Highlights: • Horizontal global solar radiation (HGSR) is predicted based on a new hybrid approach. • Support Vector Machines and Wavelet Transform algorithm (SVM–WT) are combined. • Different sets of meteorological elements are used to predict HGSR. • The precision of SVM–WT is assessed thoroughly against ANN, GP and ARMA. • SVM–WT would be an appealing approach to predict HGSR and outperforms others. - Abstract: In this paper, a new hybrid approach combining the Support Vector Machine (SVM) with the Wavelet Transform (WT) algorithm is developed to predict horizontal global solar radiation. The predictions are conducted on both daily and monthly mean scales for an Iranian coastal city. The proposed SVM–WT method is compared against other existing techniques to demonstrate its efficiency and viability. Three different sets of parameters serve as inputs to establish three models. The results indicate that the model using relative sunshine duration, difference between air temperatures, relative humidity, average temperature and extraterrestrial solar radiation as inputs shows higher performance than the other models. The statistical analysis demonstrates that the SVM–WT approach enjoys very good performance and outperforms other approaches. For the best SVM–WT model, the obtained statistical indicators of mean absolute percentage error, mean absolute bias error, root mean square error, relative root mean square error and coefficient of determination for daily estimation are 6.9996%, 0.8405 MJ/m², 1.4245 MJ/m², 7.9467% and 0.9086, respectively. Also, for monthly mean estimation the values are 3.2601%, 0.5104 MJ/m², 0.6618 MJ/m², 3.6935% and 0.9742, respectively. Based upon relative percentage error, for the best SVM–WT model, 88.70% of daily predictions fall within the acceptable range of −10% to +10%.
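    The statistical indicators quoted above (MAPE, MABE, RMSE, RRMSE and the coefficient of determination) follow standard definitions. A minimal sketch with made-up radiation values, purely to show how the numbers are computed:

```python
import numpy as np

def solar_error_stats(measured, predicted):
    """Standard benchmark indicators for radiation models:
    MAPE (%), MABE, RMSE, RRMSE (%) and coefficient of determination R^2."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    err = predicted - measured
    mape = 100.0 * np.mean(np.abs(err / measured))
    mabe = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err**2))
    rrmse = 100.0 * rmse / np.mean(measured)
    r2 = 1.0 - np.sum(err**2) / np.sum((measured - measured.mean())**2)
    return dict(MAPE=mape, MABE=mabe, RMSE=rmse, RRMSE=rrmse, R2=r2)

# Toy daily radiation values in MJ/m^2, purely illustrative
obs = np.array([18.2, 21.5, 14.9, 25.3, 19.8])
pred = np.array([17.6, 22.1, 15.4, 24.5, 20.3])
stats = solar_error_stats(obs, pred)
```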

  17. Application of the unwrapped phase inversion to land data without source estimation

    KAUST Repository

    Choi, Yun Seok

    2015-08-19

    Unwrapped phase inversion with strong damping was developed to solve the phase wrapping problem in frequency-domain waveform inversion. In this study, we apply the unwrapped phase inversion to band-limited real land data, for which the available minimum frequency is quite high. An important issue with these data is a strong ambiguity in the source-ignition time (or source shift) seen in the seismograms. A source-estimation approach does not fully address the issue of source shift, since the velocity model and the source wavelet are updated simultaneously and interact with each other. We suggest a source-independent unwrapped phase inversion approach instead of relying on source estimation for this land data. In the source-independent approach, the phase of the modeled data converges not to the exact phase value of the observed data, but to the relative phase value (or the trend of phases); thus it has the potential to resolve the ambiguity of source-ignition time in a seismogram and work better than the source-estimation approach. Numerical examples demonstrate the validity of the source-independent unwrapped phase inversion, especially for land field data having an ambiguity in the source-ignition time.

  18. Source Estimation by Full Wave Form Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Sjögreen, Björn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Petersson, N. Anders [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing

    2013-08-07

    Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem for estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, the start time, and the frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a non-linear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth-order accurate finite difference method, developed in [12], to evolve the waves forwards in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the non-linear conjugate gradient minimization algorithm. A new point moment tensor source discretization is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the non-linear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
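    As a toy illustration of the misfit-minimization idea, the sketch below inverts for a single source parameter (the start time of an assumed Ricker source-time function) by a brute-force scan of the L2 waveform misfit. The report itself uses an adjoint-based non-linear conjugate gradient over all 11 parameters; nothing here reproduces that machinery.

```python
import numpy as np

def ricker_stf(t, t0, f):
    """Ricker source-time function with centre time t0 and dominant
    frequency f (an assumed stand-in for the report's rise-time model)."""
    arg = (np.pi * f * (t - t0)) ** 2
    return (1.0 - 2.0 * arg) * np.exp(-arg)

def misfit(t0_trial, t, observed, f):
    """Full-waveform L2 misfit, reduced here to one trace, one unknown."""
    r = ricker_stf(t, t0_trial, f) - observed
    return 0.5 * float(np.sum(r**2))

# Noiseless synthetic "recording" from a true start time of 1.0 s
t = np.linspace(0.0, 2.0, 2001)
observed = ricker_stf(t, t0=1.0, f=5.0)

# 1-D scan of the misfit over candidate start times
candidates = np.linspace(0.5, 1.5, 1001)
errors = [misfit(c, t, observed, 5.0) for c in candidates]
t0_est = float(candidates[int(np.argmin(errors))])
```

    Even this one-parameter toy shows why good scaling matters: the misfit oscillates with the wavelet's period away from the minimum, which is the kind of landscape the Hessian-based scaling in the report is designed to handle.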

  19. Wavelet basics

    CERN Document Server

    Chan, Y T

    1995-01-01

    Since the study of wavelets is a relatively new area, with much of the research coming from mathematicians, most of the literature uses terminology, concepts and proofs that may, at times, be difficult and intimidating for the engineer. Wavelet Basics has therefore been written as an introductory book for scientists and engineers. The mathematical presentation has been kept simple, the concepts being presented in elaborate detail in a terminology that engineers will find familiar. Difficult ideas are illustrated with examples which will also aid in the development of an intuitive insight. Chapter 1 reviews the basics of signal transformation and discusses the concepts of duals and frames. Chapter 2 introduces the wavelet transform, contrasts it with the short-time Fourier transform and clarifies the names of the different types of wavelet transforms. Chapter 3 links multiresolution analysis, orthonormal wavelets and the design of digital filters. Chapter 4 gives a tour d'horizon of topics of current interest: wave...

  20. The Chandra Source Catalog 2.0: Estimating Source Fluxes

    Science.gov (United States)

    Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended X-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance in actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  1. Application of Modal Parameter Estimation Methods for Continuous Wavelet Transform-Based Damage Detection for Beam-Like Structures

    Directory of Open Access Journals (Sweden)

    Zhi Qiu

    2015-02-01

    Full Text Available This paper presents a hybrid damage detection method based on continuous wavelet transform (CWT and modal parameter identification techniques for beam-like structures. First, two kinds of mode shape estimation methods, herein referred to as the quadrature peaks picking (QPP and rational fraction polynomial (RFP methods, are used to identify the first four mode shapes of an intact beam-like structure based on the hammer/accelerometer modal experiment. The results are compared and validated using a numerical simulation with ABAQUS software. In order to determine the damage detection effectiveness between the QPP-based method and the RFP-based method when applying the CWT technique, the first two mode shapes calculated by the QPP and RFP methods are analyzed using CWT. The experiment, performed on different damage scenarios involving beam-like structures, shows that, due to the outstanding advantage of the denoising characteristic of the RFP-based (RFP-CWT technique, the RFP-CWT method gives a clearer indication of the damage location than the conventionally used QPP-based (QPP-CWT method. Finally, an overall evaluation of the damage detection is outlined, as the identification results suggest that the newly proposed RFP-CWT method is accurate and reliable in terms of detection of damage locations on beam-like structures.

  2. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Lee, Seongjun; Kim, Jonghoon

    2015-01-01

    Sophisticated data on the experimental DCV (discharging/charging voltage) of a lithium-ion battery are required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when sensing noisy DCV signals, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noise-ridden DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, A_n and D_n). Specifically, signal processing of the high-frequency component D_n, which focuses on a short time interval, is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detail coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data of the experimental DCV is required for high-accuracy SOC. • DWT (discrete wavelet transform)-based denoising technique is newly investigated. • Three steps for denoising a noisy DCV measurement in this work are implemented. • Experimental results indicate the robustness of the proposed work for reliable SOC
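    The three denoising steps (MRA decomposition, hard thresholding of the detail band, reconstruction) can be sketched with a single-level Haar DWT. This is an illustrative stand-in: the wavelet, decomposition depth, threshold value and voltage signal below are all assumptions, not the authors' choices.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (A) and detail (D) bands."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: interleave the reconstructed even/odd samples."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise_dcv(signal, thresh):
    """Hard-threshold the detail band, then reconstruct (single-level MRA)."""
    a, d = haar_dwt(signal)
    d[np.abs(d) < thresh] = 0.0   # hard-thresholding denoising rule
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)
clean = 4.0 - 0.5 * t                       # slowly drifting toy "DCV"
noisy = clean + 0.02 * rng.standard_normal(t.size)
denoised = denoise_dcv(noisy, thresh=0.05)
```

    A practical implementation would use a deeper decomposition (PyWavelets' `wavedec`/`waverec`) and a data-driven threshold, but the separation of a smooth voltage trend from high-frequency sensor noise works the same way.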

  3. Loudness estimation of simultaneous sources using beamforming

    DEFF Research Database (Denmark)

    Song, Woo-keun; Ellermeier, Wolfgang; Minnaar, Pauli

    2006-01-01

    An algorithm is proposed for estimating the loudness of several simultaneous sound sources by means of microphone-array beamforming. The algorithm is derived from two listening experiments in which the loudness of two simultaneous sounds (narrow-band noises with 1-kHz and 3.15-kHz center frequencies) was matched to a single sound (2-kHz narrow-band noise). The simultaneous sounds were presented from either one sound source or two spatially separated sources, whereas the single sound was presented from the frontal direction. The results indicate that overall loudness can be calculated by summing the loudnesses of the individual sources according to a simple psychophysical relationship.

  4. Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example, the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  5. Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    2003-01-01

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example, the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  6. Do wavelet filters provide more accurate estimates of reverberation times at low frequencies

    DEFF Research Database (Denmark)

    Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.

    2016-01-01

    It has been amply demonstrated in the literature that it is not possible to measure acoustic decays without significant errors for low BT values (narrow filters and/or low reverberation times). Recently, it has been shown how the main source of distortion in the time envelope of the acoustic deca...

  7. Source term estimation for small sized HTRs

    International Nuclear Information System (INIS)

    Moormann, R.

    1992-08-01

    Accidents which have to be considered are core heat-up, reactivity transients, water or air ingress, and primary circuit depressurization. The main effort of this paper concerns water/air ingress and depressurization, which requires consideration of fission product plateout under normal operating conditions; for the latter it is clearly shown that absorption (penetration) mechanisms are much less important than was sometimes assumed in the past. Source term estimation procedures for core heat-up events are briefly reviewed; reactivity transients are apparently covered by them. Besides a general literature survey, including identification of areas with insufficient knowledge, this paper contains some estimates of the thermomechanical behaviour of fission products in water or air ingress accidents. Typical source term examples are also presented. In an appendix, evaluations of the AVR experiments VAMPYR-I and -II with respect to plateout and fission product filter efficiency are outlined and used as a validation step for the new plateout code SPATRA. (orig.)

  8. Simultaneous Determination of Source Wavelet and Velocity Profile Using Impulsive Point-Source Reflections from a Layered Fluid

    National Research Council Canada - National Science Library

    Bube, K; Lailly, P; Sacks, P; Santosa, F; Symes, W. W

    1987-01-01

    .... We show that a quasi-impulsive, isotropic point source may be recovered simultaneously with the velocity profile from reflection data over a layered fluid, in linear (perturbation) approximation...

  9. Current Source Density Estimation for Single Neurons

    Directory of Open Access Journals (Sweden)

    Dorottya Cserpán

    2014-03-01

    Full Text Available Recent developments in multielectrode technology have made it possible to measure the extracellular potential generated in neural tissue with spatial precision on the order of tens of micrometers and on a submillisecond time scale. Combining such measurements with imaging of single neurons within the studied tissue opens up new experimental possibilities for estimating the distribution of current sources along a dendritic tree. In this work we show that if we are able to relate part of the recording of extracellular potential to a specific cell of known morphology, we can estimate the spatiotemporal distribution of transmembrane currents along it. We present here an extension of the kernel CSD method (Potworowski et al., 2012) applicable in such a case. We test it on several model neurons of progressively complicated morphologies, from ball-and-stick to realistic, up to analysis of simulated neuron activity embedded in a substantial working network (Traub et al., 2005). We discuss the caveats and possibilities of this new approach.

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
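    The exponential window-fitting step can be illustrated with a log-linear least-squares fit. The decay amplitude and rate below are invented for illustration and do not come from the paper:

```python
import numpy as np

def fit_exponential_decay(t, y):
    """Least-squares fit of y = A * exp(-b * t) via a log-linear model.
    Returns A, b and the fitted curve; y must be strictly positive."""
    t = np.asarray(t, float)
    slope, intercept = np.polyfit(t, np.log(y), 1)  # ln y = ln A - b * t
    b, A = -slope, np.exp(intercept)
    return A, b, A * np.exp(-b * t)

# One data window of a clean transient decay; in the GREATEM workflow,
# windows flagged as noise-polluted would be replaced by such a fit.
t = np.linspace(0.001, 0.02, 40)   # s, assumed window
y = 3.0 * np.exp(-150.0 * t)       # assumed decay amplitude and rate
A, b, y_fit = fit_exponential_decay(t, y)
```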

  11. Wavelet Transforms: Application to Data Analysis - I -10 ...

    Indian Academy of Sciences (India)

    from 0 to ∞, whereas translation index k takes values from −∞ .... scaling function in any wavelet basis set. ..... sets derived from diverse sources like stock market, cos- ... [4] G B Folland, From Calculus to Wavelets: A New Mathematical Tech-.

  12. Certain problems concerning wavelets and wavelets packets

    International Nuclear Information System (INIS)

    Siddiqi, A.H.

    1995-09-01

    Wavelets are the outcome of a synthesis of ideas that have emerged in different branches of science and technology, mainly in the last decade. The concept of wavelet packets, which are superpositions of wavelets, was introduced a couple of years ago. They form bases which retain many properties of wavelets, such as orthogonality, smoothness and localization. The Walsh orthonormal system is a special case of a wavelet packet. Wavelet packets provide at our disposal a library of orthonormal bases, each of which can be used to analyze a given signal of finite energy; the optimal choice is decided by the entropy criterion. In the present paper we discuss results concerning the convergence, coefficients, and approximation of wavelet packet series in general and wavelet series in particular. Wavelet packet techniques for the solution of differential equations are also mentioned. (author). 117 refs

  13. Certain problems concerning wavelets and wavelets packets

    Energy Technology Data Exchange (ETDEWEB)

    Siddiqi, A H

    1995-09-01

    Wavelets are the outcome of a synthesis of ideas that have emerged in different branches of science and technology, mainly in the last decade. The concept of wavelet packets, which are superpositions of wavelets, was introduced a couple of years ago. They form bases which retain many properties of wavelets, such as orthogonality, smoothness and localization. The Walsh orthonormal system is a special case of a wavelet packet. Wavelet packets provide at our disposal a library of orthonormal bases, each of which can be used to analyze a given signal of finite energy; the optimal choice is decided by the entropy criterion. In the present paper we discuss results concerning the convergence, coefficients, and approximation of wavelet packet series in general and wavelet series in particular. Wavelet packet techniques for the solution of differential equations are also mentioned. (author). 117 refs.

  14. Wavelet theory and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  15. A new fractional wavelet transform

    Science.gov (United States)

    Dai, Hongzhe; Zheng, Zhibao; Wang, Wei

    2017-03-01

    The fractional Fourier transform (FRFT) is a potent tool for analyzing time-varying signals. However, it fails to locate the fractional Fourier domain (FRFD)-frequency contents, which is required in some applications. A novel fractional wavelet transform (FRWT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the time-FRFD-frequency plane. The definition, basic properties, inverse transform and reproducing kernel of the proposed FRWT are considered. It has been shown that an FRWT of proper order corresponds to the classical wavelet transform (WT). The multiresolution analysis (MRA) associated with the developed FRWT, together with the construction of orthogonal fractional wavelets, is also presented. Three applications are discussed: the analysis of signals with time-varying frequency content, FRFD spectrum estimation of signals involving noise, and the construction of the fractional Haar wavelet. Simulations verify the validity of the proposed FRWT.

  16. Multiscale peak detection in wavelet space.

    Science.gov (United States)

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-07

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation; it can increase accuracy and reliability by identifying peaks across scales in wavelet space while implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate for peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It achieves high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset, and the Romanian database of Raman spectra, and is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than those of the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD is a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
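    A minimal sketch of the CWT-across-scales idea (not the MSPD package itself): a naive Ricker-wavelet CWT computed by direct convolution, with the position of the largest coefficient tracked per scale as a crude single-peak ridge. The signal, scales and kernel lengths are all assumed for illustration.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet sampled at `points` positions, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    s = t / a
    return (1.0 - s**2) * np.exp(-s**2 / 2.0)

def cwt_ridge(signal, scales):
    """Naive CWT by direct convolution; for each scale, return the index of
    the largest coefficient (a crude one-peak ridge)."""
    coefs = np.array([
        np.convolve(signal, ricker(10 * a + 1, a), mode="same")  # odd kernel
        for a in scales
    ])
    return coefs, coefs.argmax(axis=1)

x = np.linspace(0.0, 1.0, 500)
signal = np.exp(-((x - 0.62) ** 2) / (2 * 0.01**2))   # one narrow peak
coefs, ridge = cwt_ridge(signal, scales=[2, 4, 8])
```

    A full implementation such as MSPD additionally links local maxima into ridge lines across scales and thresholds each candidate peak by the maximum of its ridge; the stability of the argmax across the three scales here is the simplest form of that idea.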

  17. Estimation of population dose from all sources in Japan

    International Nuclear Information System (INIS)

    Kusama, Tomoko; Nakagawa, Takeo; Kai, Michiaki; Yoshizawa, Yasuo

    1988-01-01

    The purposes of estimating population doses are to understand the per-caput doses to members of the public from each artificial radiation source and to determine the proportion that each individual source contributes to the total population dose. We divided population doses into two categories: individual-related and source-related population doses. The individual-related population dose is estimated under maximally conservative assumptions, for use in allocating the dose limits for members of the public. The source-related population dose is estimated both to justify sources and practices and to optimize radiation protection; it should therefore be estimated as realistically as possible. We investigated all sources that caused exposure to the population in Japan from the above points of view

  18. Workflow for near-surface velocity automatic estimation: Source-domain full-traveltime inversion followed by waveform inversion

    KAUST Repository

    Liu, Lu

    2017-08-17

    This paper presents a workflow for automatic near-surface velocity estimation using the early arrivals of seismic data. The workflow comprises two methods: source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source-domain FTI is capable of automatically generating a background velocity that kinematically matches the reconstructed plane-wave sources of early arrivals with the true plane-wave sources. This method does not require picking first arrivals for inversion, which is one of the most challenging aspects of ray-based first-arrival tomographic inversion. Moreover, compared with conventional Born-based methods, source-domain FTI can distinguish between slower and faster initial model errors by providing the correct sign of the model gradient. In addition, this method does not need an estimate of the source wavelet, which is a requirement for receiver-domain wave-equation velocity inversion. The model derived from source-domain FTI is then used as input to early-arrival waveform inversion to obtain the short-wavelength velocity components. We have tested the workflow on synthetic and field seismic data sets. The results show that source-domain FTI can generate reasonable background velocities for early-arrival waveform inversion even when subsurface velocity reversals are present, and that the workflow can produce a high-resolution near-surface velocity model.

  19. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    Science.gov (United States)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way to estimate the source signature. However, when encountering multimode surface waves, which are common in shallow seismic surveys, strong spurious events appear in the seismic interferometric results. These spurious events introduce errors into the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method, in which multimode surface waves are mode-separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded from the virtual-source recordings and the estimated source signatures. A synthetic example shows that a correct source signature can be estimated with the proposed method, while strong spurious oscillations occur in the estimated source signature if mode separation is not applied first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating the seismic source signature from shallow seismic shot gathers containing multimode surface waves.
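
    The cross-mode artefact described here can be illustrated with a small numpy experiment: interferometry by cross-correlation of each mode alone recovers the true inter-receiver delay, while correlating the full multimode traces also produces a spurious event at a cross-mode lag. All wavelets, delays, and amplitudes below are hypothetical, chosen only to make the effect visible:

```python
import numpy as np

n = 400
t = np.arange(-30, 31)
wav = (1 - 0.05 * t ** 2) * np.exp(-0.025 * t ** 2)   # toy source wavelet

def trace(delay):
    """A length-n trace with the wavelet arriving at the given sample."""
    tr = np.zeros(n)
    tr[delay:delay + len(wav)] = wav
    return tr

# two surface-wave modes with different moveout between receivers A and B
m1_a, m1_b = trace(100), trace(140)   # mode 1: 40-sample inter-receiver delay
m2_a, m2_b = trace(120), trace(200)   # mode 2: 80-sample inter-receiver delay
full_a, full_b = m1_a + m2_a, m1_b + m2_b

def dominant_lag(a, b):
    """Lag (in samples) at which b best correlates with a."""
    xc = np.correlate(b, a, mode="full")
    return np.argmax(xc) - (len(a) - 1)

lag1 = dominant_lag(m1_a, m1_b)   # mode-separated: recovers the true 40
lag2 = dominant_lag(m2_a, m2_b)   # mode-separated: recovers the true 80

# correlating the full (multimode) traces also produces spurious cross-mode
# events, e.g. at lag 140 - 120 = 20, as strong as the true events
xc_full = np.correlate(full_b, full_a, mode="full")
spurious = xc_full[n - 1 + 20]
```
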

  20. Building nonredundant adaptive wavelets by update lifting

    NARCIS (Netherlands)

    H.J.A.M. Heijmans (Henk); B. Pesquet-Popescu; G. Piella (Gema)

    2002-01-01

    textabstractAdaptive wavelet decompositions appear useful in various applications in image and video processing, such as image analysis, compression, feature extraction, denoising and deconvolution, or optic flow estimation. For such tasks it may be important that the multiresolution representations

  1. Taking into account latency, amplitude, and morphology: improved estimation of single-trial ERPs by wavelet filtering and multiple linear regression.

    Science.gov (United States)

    Hu, L; Liang, M; Mouraux, A; Wise, R G; Hu, Y; Iannetti, G D

    2011-12-01

    Across-trial averaging is a widely used approach to enhance the signal-to-noise ratio (SNR) of event-related potentials (ERPs). However, across-trial variability of ERP latency and amplitude may contain physiologically relevant information that is lost by across-trial averaging. Hence, we aimed to develop a novel method that uses 1) wavelet filtering (WF) to enhance the SNR of ERPs and 2) a multiple linear regression with a dispersion term (MLR(d)) that takes into account shape distortions to estimate the single-trial latency and amplitude of ERP peaks. Using simulated ERP data sets containing different levels of noise, we provide evidence that, compared with other approaches, the proposed WF+MLR(d) method yields the most accurate estimate of single-trial ERP features. When applied to a real laser-evoked potential data set, the WF+MLR(d) approach provides reliable estimation of single-trial latency, amplitude, and morphology of ERPs and thereby allows performing meaningful correlations at single-trial level. We obtained three main findings. First, WF significantly enhances the SNR of single-trial ERPs. Second, MLR(d) effectively captures and measures the variability in the morphology of single-trial ERPs, thus providing an accurate and unbiased estimate of their peak latency and amplitude. Third, intensity of pain perception significantly correlates with the single-trial estimates of N2 and P2 amplitude. These results indicate that WF+MLR(d) can be used to explore the dynamics between different ERP features, behavioral variables, and other neuroimaging measures of brain activity, thus providing new insights into the functional significance of the different brain processes underlying the brain responses to sensory stimuli.
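
    The regression step of this kind of approach can be sketched with a simple Taylor-expansion argument: a shifted template f(t − τ) is approximately f − τf′ + (τ²/2)f″, so regressing a trial onto the template, its derivative, and a second-derivative "dispersion" term yields amplitude and latency estimates. This is a minimal noiseless sketch, not the paper's WF+MLR(d) pipeline; the Gaussian template and trial parameters are hypothetical:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 400)
f = np.exp(-t ** 2 / 0.02)          # canonical (template) ERP peak shape
f1 = np.gradient(f, t)              # latency regressor: 1st derivative
f2 = np.gradient(f1, t)             # dispersion regressor: 2nd derivative

# one simulated trial: amplitude 1.8, latency shift +0.03 s
trial = 1.8 * np.exp(-(t - 0.03) ** 2 / 0.02)

# multiple linear regression with a dispersion term:
# trial ~ b0*f + b1*f' + b2*f''   (Taylor expansion of f(t - tau))
X = np.column_stack([f, f1, f2])
b, *_ = np.linalg.lstsq(X, trial, rcond=None)

amp_est = b[0]            # single-trial amplitude estimate
lat_est = -b[1] / b[0]    # single-trial latency estimate (seconds)
```

    With real data, the wavelet-filtering stage would first suppress noise so that these regression coefficients stay stable at the single-trial level.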

  2. Noncontact Surface Roughness Estimation Using 2D Complex Wavelet Enhanced ResNet for Intelligent Evaluation of Milled Metal Surface Quality

    Directory of Open Access Journals (Sweden)

    Weifang Sun

    2018-03-01

    Machined surfaces are rough from a microscopic perspective no matter how finely they are finished. Surface roughness is an important factor to consider during production quality control. Using modern techniques, surface roughness measurements are beneficial for improving machining quality. With optical imaging of machined surfaces as input, a convolutional neural network (CNN) can be utilized as an effective way to characterize hierarchical features without prior knowledge. In this paper, a novel CNN-based method is proposed for intelligent surface roughness identification. The technical scheme incorporates three elements: texture skew correction, image filtering, and intelligent neural network learning. Firstly, a texture skew correction algorithm, based on an improved Sobel operator and the Hough transform, is applied so that surface texture directions can be adjusted. Secondly, the two-dimensional (2D) dual-tree complex wavelet transform (DTCWT) is employed to retrieve surface topology information, which is more effective for feature classification. In addition, a residual network (ResNet) is utilized to ensure automatic recognition of the filtered texture features. The proposed method has verified its feasibility as well as its effectiveness in actual surface roughness estimation experiments using the material of spheroidal graphite cast iron 500-7 in an agricultural machinery manufacturing company. Testing results demonstrate that the proposed method achieves high-precision surface roughness estimation.

  3. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  4. Adaptive Wavelet Transforms

    Energy Technology Data Exchange (ETDEWEB)

    Szu, H.; Hsu, C. [Univ. of Southwestern Louisiana, Lafayette, LA (United States)

    1996-12-31

    Human sensor systems (HSS) may be approximately described as an adaptive, or self-learning, version of the wavelet transform (WT) that is capable of learning suitable mother wavelets from several input-output associative pairs. Such an adaptive WT (AWT) is a redundant combination of mother wavelets used to either represent or classify inputs.

  5. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    Science.gov (United States)

    Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc-second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  6. Significance tests for the wavelet cross spectrum and wavelet linear coherence

    Directory of Open Access Journals (Sweden)

    Z. Ge

    2008-12-01

    This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study to Ge (2007). Conventional approaches used by Torrence and Compo (1998), based on stationary background noise time series, were used here to estimate the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and that the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses over the entire range [−π,π], with fairly high probability of values close to ±π occurring. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence attains values close to one. The significance test of the wavelet linear coherence can therefore be used to complement inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets: simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set.
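
    The two distributional claims about white-noise phase angles (uniform phase; standard Cauchy tangent) are easy to verify by Monte Carlo on the ordinary cross-spectrum at a single frequency bin — a simplified stand-in for the wavelet cross spectrum, with all sample sizes and the chosen bin being arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n, k = 10000, 256, 10

# cross-spectrum of two independent Gaussian white noise series at one bin
x = np.fft.rfft(rng.standard_normal((n_trials, n)), axis=1)[:, k]
y = np.fft.rfft(rng.standard_normal((n_trials, n)), axis=1)[:, k]
phase = np.angle(x * np.conj(y))

# uniform phase on [-pi, pi] => variance near pi^2 / 3
var_phase = np.var(phase)

# tangent of the phase should be standard Cauchy => quartiles near -1 and +1
q1, q3 = np.quantile(np.tan(phase), [0.25, 0.75])
```

    The heavy Cauchy tails are exactly why quartiles, rather than moments, are the meaningful summary of the tangent here.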

  7. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the

  8. DOA Estimation of Audio Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Heusdens, Richard

    2016-01-01

    Reverberation is well-known to have a detrimental impact on many localization methods for audio sources. We address this problem by imposing a model for the early reflections as well as a model for the audio source itself. Using these models, we propose two iterative localization methods...... that estimate the direction-of-arrival (DOA) of both the direct path of the audio source and the early reflections. In these methods, the contribution of the early reflections is essentially subtracted from the signal observations before localization of the direct path component, which may reduce the estimation...
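
    The anechoic baseline these methods build on — estimate the time difference of arrival (TDOA) between two microphones by cross-correlation, then convert it to a DOA using far-field geometry — can be sketched as follows. The sample rate, spacing, and source angle are illustrative, and no early reflections are modeled here:

```python
import numpy as np

fs, c, d = 8000.0, 343.0, 0.20     # sample rate, speed of sound, mic spacing
rng = np.random.default_rng(1)
s = rng.standard_normal(2048)      # broadband source signal

true_doa = np.deg2rad(40.0)                    # angle from broadside
k = int(round(fs * d * np.sin(true_doa) / c))  # inter-mic delay in samples

m1 = s
m2 = np.roll(s, k)                 # mic 2 receives the signal k samples later

# TDOA by cross-correlation, then DOA from the far-field geometry
xc = np.correlate(m2, m1, mode="full")
tdoa = (np.argmax(xc) - (len(m1) - 1)) / fs
doa_est = np.arcsin(np.clip(tdoa * c / d, -1.0, 1.0))
```

    In a reverberant room, each early reflection would add its own correlation peak; the methods in this abstract model and subtract those contributions before locating the direct path.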

  9. The Source Signature Estimator - System Improvements and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sabel, Per; Brink, Mundy; Eidsvig, Seija; Jensen, Lars

    1998-12-31

    This presentation relates briefly to the first part of the joint project on post-survey analysis of shot-by-shot based source signature estimation. The improvements of a Source Signature Estimator system are analysed. The notional source method can give suboptimal results when not inputting the real array geometry, i.e. actual separations between the sub-arrays of an air gun array, to the notional source algorithm. This constraint has been addressed herein and was implemented for the first time in the field in summer 1997. The second part of this study will show the potential advantages for interpretation when the signature estimates are then to be applied in the data processing. 5 refs., 1 fig.

  10. Fine-scale estimation of carbon monoxide and fine particulate matter concentrations in proximity to a road intersection by using wavelet neural network with genetic algorithm

    Science.gov (United States)

    Wang, Zhanyong; Lu, Feng; He, Hong-di; Lu, Qing-Chang; Wang, Dongsheng; Peng, Zhong-Ren

    2015-03-01

    At road intersections, vehicles frequently stop with idling engines during the red-light period and speed up rapidly in the green-light period, which generates higher velocity fluctuations and thus higher emission rates. Additionally, frequent changes in wind direction further increase the variability of pollutant dispersion at the street scale. It is, therefore, very difficult to estimate the distribution of pollutant concentrations using conventional deterministic causal models. For this reason, a hybrid model combining a wavelet neural network and a genetic algorithm (GA-WNN) is proposed for predicting 5-min series of carbon monoxide (CO) and fine particulate matter (PM2.5) concentrations in proximity to an intersection. The proposed model is examined against measured data under two situations. As the measured pollutant concentrations are found to depend on the distance to the intersection, the model is evaluated at three locations, i.e. 110 m, 330 m and 500 m. Because pollutant concentrations vary differently over time, the model is also evaluated in peak and off-peak traffic periods separately. The proposed model, together with a back-propagation neural network (BPNN), is examined with the measured data in these situations. The proposed model is found to outperform the BPNN in predictability and precision for both CO and PM2.5, implying that the hybrid model can be an effective tool for improving the accuracy of estimating pollutant distribution patterns at intersections. These findings demonstrate the potential of the proposed model for real-time forecasting of air pollution distribution patterns in proximity to road intersections.

  11. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

    Air quality in confined spaces, such as spacecraft, aircraft, and submarines, is a matter of great concern. As residence times in such confined spaces increase, contaminant pollution has become a main factor endangering life. It is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position and to estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve these two aims. The method is based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm is derived to locate the source position, and a Kalman filter is used to further estimate the contaminant emission strength. The method tracks and predicts the source strength dynamically and can also predict the distribution of contaminant concentration. Simulation results demonstrate the effectiveness of the method.
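
    The Kalman-filter step can be illustrated with a deliberately minimal scalar version: the unknown emission strength is modeled as a random walk, and a sensor reads a concentration proportional to it plus noise. The transport coefficient, noise levels, and true strength below are all hypothetical, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

true_q = 5.0     # true emission strength, assumed constant in this toy
H = 0.8          # transport coefficient: sensor concentration per unit strength
R = 0.5 ** 2     # measurement noise variance
Q = 1e-4         # process noise: lets the filter track a slowly drifting source

q_est, P = 0.0, 10.0          # initial strength estimate and its variance
for _ in range(300):
    z = H * true_q + rng.normal(0.0, 0.5)   # noisy concentration reading
    P = P + Q                               # predict (random-walk model)
    K = P * H / (H * P * H + R)             # Kalman gain
    q_est = q_est + K * (z - H * q_est)     # update with the innovation
    P = (1.0 - K * H) * P
```

    In the paper's setting, H would come from the discrete concentration model linking source strength to each sensor, and the same recursion would run as measurements arrive.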

  12. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one-tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows information about source position and bearing to be obtained. Moreover, sound sources have been included into sensor systems together...... The estimated orientations, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......

  13. Simultaneous head tissue conductivity and EEG source location estimation.

    Science.gov (United States)

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15cm(2)-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm(2)-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES

    International Nuclear Information System (INIS)

    Muske, K.; Howse, J.

    2000-01-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity
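
    The recursive least squares update that this study found to perform best has a standard generic form: for each new measurement y = φᵀθ + noise, update a gain, the parameter estimate, and a covariance matrix. The sketch below uses a hypothetical two-parameter linearized measurement model, not the paper's detector geometry:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([2.0, -1.0])   # unknown parameters of a linearized model

theta = np.zeros(2)                  # recursive estimate
P = 100.0 * np.eye(2)                # large initial covariance
lam = 1.0                            # forgetting factor (1 = ordinary RLS)

for _ in range(500):
    phi = rng.standard_normal(2)               # regressor for this measurement
    y = phi @ theta_true + rng.normal(0.0, 0.1)
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # innovation update
    P = (P - np.outer(K, phi @ P)) / lam       # covariance downdate
```

    Setting the forgetting factor below one discounts old measurements, which is what allows such an estimator to track a moving source.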

  15. Fundamental limits of radio interferometers: calibration and source parameter estimation

    OpenAIRE

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

    2012-01-01

    We use information theory to derive fundamental limits on the capacity to calibrate next-generation radio interferometers, and measure parameters of point sources for instrument calibration, point source subtraction, and data deconvolution. We demonstrate the implications of these fundamental limits, with particular reference to estimation of the 21cm Epoch of Reionization power spectrum with next-generation low-frequency instruments (e.g., the Murchison Widefield Array -- MWA, Precision Arra...

  16. Fissile mass estimation by pulsed neutron source interrogation

    Energy Technology Data Exchange (ETDEWEB)

    Israelashvili, I., E-mail: israelashvili@gmail.com [Nuclear Research Center of the Negev, P.O.B 9001, Beer Sheva 84190 (Israel); Dubi, C.; Ettedgui, H.; Ocherashvili, A. [Nuclear Research Center of the Negev, P.O.B 9001, Beer Sheva 84190 (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Joint Research Centre, Via E. Fermi, 2749, 21027 Ispra (Italy); Beck, A. [Nuclear Research Center of the Negev, P.O.B 9001, Beer Sheva 84190 (Israel); Roesgen, E.; Crochmore, J.M. [Nuclear Security Unit, Institute for Transuranium Elements, Joint Research Centre, Via E. Fermi, 2749, 21027 Ispra (Italy); Ridnik, T.; Yaar, I. [Nuclear Research Center of the Negev, P.O.B 9001, Beer Sheva 84190 (Israel)

    2015-06-11

    Passive methods for detecting correlated neutrons from spontaneous fissions (e.g. multiplicity counting and SVM) are widely used for fissile mass estimation. These methods can be used for fissile materials that emit a significant number of fission neutrons (like plutonium). Active interrogation, in which fissions are induced in the tested material by an external continuous source or by a pulsed neutron source, has the potential advantages of fast measurement and independence from the spontaneous fissions of the tested fissile material, thus enabling uranium measurement. Until recently, using the multiplicity method for uranium mass estimation was possible only for active interrogation with a continuous neutron source. Pulsed active neutron interrogation measurements were analyzed with techniques, e.g. differential die-away analysis (DDA), that ignore or only implicitly include the multiplicity effect (self-induced fission chains). Recently, both the multiplicity and the SVM techniques were theoretically extended to the analysis of active fissile mass measurements made with a pulsed neutron source. In this study, the SVM technique for a pulsed neutron source is experimentally examined for the first time. The measurements were conducted at the PUNITA facility of the Joint Research Centre in Ispra, Italy. First promising results of mass estimation by the SVM technique using a pulsed neutron source are presented.

  17. Wavelets in neuroscience

    CERN Document Server

    Hramov, Alexander E; Makarov, Valeri A; Pavlov, Alexey N; Sitnikova, Evgenia

    2015-01-01

    This book examines theoretical and applied aspects of wavelet analysis in neurophysics, describing in detail different practical applications of the wavelet theory in the areas of neurodynamics and neurophysiology and providing a review of fundamental work that has been carried out in these fields over the last decade. Chapters 1 and 2 introduce and review the relevant foundations of neurophysics and wavelet theory, respectively, pointing on one hand to the various current challenges in neuroscience and introducing on the other the mathematical techniques of the wavelet transform in its two variants (discrete and continuous) as a powerful and versatile tool for investigating the relevant neuronal dynamics. Chapter 3 then analyzes results from examining individual neuron dynamics and intracellular processes. The principles for recognizing neuronal spikes from extracellular recordings and the advantages of using wavelets to address these issues are described and combined with approaches based on wavelet neural ...

  18. Multivariate wavelet frames

    CERN Document Server

    Skopina, Maria; Protasov, Vladimir

    2016-01-01

    This book presents a systematic study of multivariate wavelet frames with matrix dilation, in particular, orthogonal and bi-orthogonal bases, which are a special case of frames. Further, it provides algorithmic methods for the construction of dual and tight wavelet frames with a desirable approximation order, namely compactly supported wavelet frames, which are commonly required by engineers. It particularly focuses on methods of constructing them. Wavelet bases and frames are actively used in numerous applications such as audio and graphic signal processing, compression and transmission of information. They are especially useful in image recovery from incomplete observed data due to the redundancy of frame systems. The construction of multivariate wavelet frames, especially bases, with desirable properties remains a challenging problem as although a general scheme of construction is well known, its practical implementation in the multidimensional setting is difficult. Another important feature of wavelet is ...

  19. Using wavelet features for analyzing gamma lines

    International Nuclear Information System (INIS)

    Medhat, M.E.; Abdel-hafiez, A.; Hassan, M.F.; Ali, M.A.; Uzhinskii, V.V.

    2004-01-01

    Data processing methods for analyzing gamma-ray spectra with symmetric bell-shaped peaks are considered. In many cases the peak form is a symmetric bell shape; in particular, the Gaussian case is the most often used, for many physical reasons. The problem is how to evaluate the parameters of such peaks, i.e. their positions, amplitudes and half-widths, both for single peaks and for overlapping peaks. Using wavelet features, with the Marr (Mexican hat) wavelet as a correlation kernel, it is possible to estimate the optimal wavelet parameters and to locate peaks in the spectrum. A comparison of the performance of the proposed method with others shows the better quality of the wavelet transform method

  20. Active SWD using monochromatic source wavelet; Tan`itsu shuhasu no shingen hakei wo mochiita active SWD

    Energy Technology Data Exchange (ETDEWEB)

    Tsuru, T; Kozawa, T [Japan National Oil Corp., Tokyo (Japan); Taniguchi, R [Mitsubishi Electric Corp., Tokyo (Japan); Nishikawa, N [Fuji Research Institute Corp., Tokyo (Japan); Matsuhashi, K

    1997-05-27

    As part of development efforts on geophysical exploration technologies for oil reservoirs, this paper describes the development of active seismic while drilling (SWD). SWD is a seismic exploration method that acquires records equivalent to VSP using seismic waves generated by the drilling bit, and allows detection and control in real time during excavation. However, its drawback is that it is subject to the limitations of the bit. To eliminate this limitation, an artificial seismic source method was devised: an SWD utilizing an artificial seismic source. A shot sub containing a magnetostriction oscillator is attached directly above the bit to generate vibration artificially, and larger seismic energy is exploited by combining this vibration with that generated by the drilling bit. The frequency band of the seismic source is narrow, close to a single-frequency waveform. Preparing a time-depth curve from the data and identifying the position of the drilling bit requires reading the first-arrival travel time. A waveform recognition technique, utilizing a matching evaluation function used in pattern recognition, was applied, making high-accuracy waveform recognition possible. 2 figs., 1 tab.

  1. Wavelets and their uses

    International Nuclear Information System (INIS)

    Dremin, Igor M; Ivanov, Oleg V; Nechitailo, Vladimir A

    2001-01-01

    This review paper is intended to give a useful guide for those who want to apply the discrete wavelet transform in practice. The notion of wavelets and their use in practical computing and various applications are briefly described, but rigorous proofs of mathematical statements are omitted, and the reader is just referred to the corresponding literature. The multiresolution analysis and fast wavelet transform have become a standard procedure for dealing with discrete wavelets. The proper choice of a wavelet and use of nonstandard matrix multiplication are often crucial for the achievement of a goal. Analysis of various functions with the help of wavelets allows one to reveal fractal structures, singularities etc. The wavelet transform of operator expressions helps solve some equations. In practical applications one often deals with the discretized functions, and the problem of stability of the wavelet transform and corresponding numerical algorithms becomes important. After discussing all these topics we turn to practical applications of the wavelet machinery. They are so numerous that we have to limit ourselves to a few examples only. The authors would be grateful for any comments which would move us closer to the goal proclaimed in the first phrase of the abstract. (reviews of topical problems)

  2. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Science.gov (United States)

    Jiang, M.; Cui, B.-Y.; Schmid, N. A.; McLaughlin, M. A.; Cao, Z.-C.

    2017-09-01

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses is improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.
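    A minimal numpy sketch of wavelet-shrinkage denoising in the spirit described above (Haar transform with a universal soft threshold; the synthetic "pulse", noise level, and decomposition depth are assumptions for illustration, not the paper's pipeline):

```python
import numpy as np

def haar_dwt(x, levels):
    """Orthonormal Haar analysis: returns the approximation and the detail bands."""
    a = x.astype(float)
    details = []
    for _ in range(levels):
        a2 = (a[0::2] + a[1::2]) / np.sqrt(2)
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)     # details[0] is the finest band
        a = a2
    return a, details

def haar_idwt(a, details):
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def denoise(x, levels=4):
    a, det = haar_dwt(x, levels)
    sigma = np.median(np.abs(det[0])) / 0.6745    # noise scale from finest band
    thr = sigma * np.sqrt(2 * np.log(x.size))     # universal threshold
    det = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in det]  # soft shrinkage
    return haar_idwt(a, det)

rng = np.random.default_rng(1)
t = np.arange(4096)
clean = 5.0 * np.exp(-0.5 * ((t - 2000) / 40.0) ** 2)   # a single broad "pulse"
noisy = clean + rng.normal(0, 1.0, t.size)
den = denoise(noisy)
print(np.std(noisy - clean), np.std(den - clean))  # denoised error should be smaller
```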

  4. A Comparative Study Of Source Location And Depth Estimates From ...

    African Journals Online (AJOL)

    ... the analytic signal amplitude (ASA) and the local wave number (LWN) of the total intensity magnetic field. In this study, a synthetic magnetic field due to four buried dipoles was analysed to show that estimates of source location and depth can be improved significantly by reducing the data to the pole prior to the application ...

  5. Parameters estimation for X-ray sources: positions

    International Nuclear Information System (INIS)

    Avni, Y.

    1977-01-01

    It is shown that the sizes of the positional error boxes for x-ray sources can be determined by using an estimation method which we have previously formulated generally and applied in spectral analyses. It is explained how this method can be used by scanning x-ray telescopes, by rotating modulation collimators, and by HEAO-A (author)

  6. Lidar method to estimate emission rates from extended sources

    Science.gov (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  7. Sparse EEG/MEG source estimation via a group lasso.

    Directory of Open Access Journals (Sweden)

    Michael Lim

    Full Text Available Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source geometries and data from a human visual evoked potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches.
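    A small sketch of a group-lasso inverse of this general flavour (proximal-gradient/ISTA with block soft-thresholding; the toy leadfield, group layout, and regularization weight are invented for illustration, not the paper's solver):

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: the proximal operator of t * ||v||_2."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= t else (1 - t / n) * v

def group_lasso(A, y, groups, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for
       min_x 0.5*||y - A x||^2 + lam * sum_g ||x_g||_2
    where `groups` is a list of index arrays (the "regions of interest")."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        for idx in groups:                 # blockwise proximal step
            x[idx] = group_soft_threshold(z[idx], lam / L)
    return x

# Toy "leadfield": 20 sensors, 12 sources in 4 groups; only group 1 is active.
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 12))
groups = [np.arange(3 * k, 3 * k + 3) for k in range(4)]
x_true = np.zeros(12); x_true[3:6] = [2.0, -1.5, 1.0]
y = A @ x_true + rng.normal(0, 0.05, 20)
x_hat = group_lasso(A, y, groups, lam=1.0)
active = [k for k, idx in enumerate(groups) if np.linalg.norm(x_hat[idx]) > 1e-6]
print(active)  # ideally only group 1 survives the group penalty
```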

  8. Modeling of Geological Objects and Geophysical Fields Using Haar Wavelets

    Directory of Open Access Journals (Sweden)

    A. S. Dolgal

    2014-12-01

    Full Text Available This article presents the application of the fast wavelet transform with basic Haar functions to the modeling of structural surfaces and geophysical fields characterized by fractal features. The multiscale representation of experimental data significantly reduces the cost of processing large data volumes and improves the quality of interpretation. The paper presents algorithms for the sectionally prismatic approximation of geological objects, for preliminary estimation of the number of equivalent sources for the analytical approximation of fields, and for determination of rock magnetization in the upper part of the geological section.
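    A minimal sketch of the kind of multiscale data reduction a Haar transform enables (one 2-D Haar level on a synthetic surface; the surface model and the keep-only-the-approximation truncation are assumptions for illustration):

```python
import numpy as np

def haar2d_level(a):
    """One level of the 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0       # columns
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar2d_inverse(ll, bands):
    lh, hl, hh = bands
    lo = np.empty((2 * ll.shape[0], ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    a = np.empty((lo.shape[0], 2 * lo.shape[1]))
    a[:, 0::2], a[:, 1::2] = lo + hi, lo - hi
    return a

# Synthetic "structural surface": smooth trend plus small-scale roughness.
rng = np.random.default_rng(3)
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
surface = 100 * np.sin(2 * np.pi * x) * np.cos(np.pi * y) + rng.normal(0, 0.5, x.shape)
ll, bands = haar2d_level(surface)
# Drop the three detail bands entirely: a 4x data reduction.
approx = haar2d_inverse(ll, tuple(np.zeros_like(b) for b in bands))
rms = np.sqrt(np.mean((approx - surface) ** 2))
print(rms)  # small relative to the ~100-unit surface relief
```

    Keeping all four bands reconstructs the surface exactly; discarding the detail bands trades a small RMS error for a fourfold reduction in data volume.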

  9. Estimation of Source terms for Emergency Planning and Preparedness

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Chul Un; Chung, Bag Soon; Ahn, Jae Hyun; Yoon, Duk Ho; Jeong, Chul Young; Lim, Jong Dae [Korea Electric Power Research Institute, Taejon (Korea, Republic of); Kang, Sun Gu; Suk, Ho; Park, Sung Kyu; Lim, Hac Kyu; Lee, Kwang Nam [Korea Power Engineering Company Consulting and Architecture Engineers, (Korea, Republic of)

    1997-12-31

    In this study, severe accident sequences for each plant of concern, representing accident sequences with a high core damage frequency and significant consequences, were selected based on the results of probabilistic safety assessments, together with the source terms and time-histories of various safety parameters under severe accidents. Accident progression analysis for each selected accident sequence was performed with the MAAP code. It was determined that the measured values, dose rate and radioisotope concentration, could provide information to the operators on the occurrence and timing of core damage, reactor vessel failure, and containment failure during severe accidents. The radioisotope concentration in the containment atmosphere, which may be measured by PASS, was estimated. This concentration can be used in emergency planning, in evaluating source term behavior in the containment, in estimating the degree and timing of core damage, in analyzing severe accident phenomena, and in estimating the amount of radioisotopes released to the environment. (author). 50 refs., 60 figs.

  10. Light Source Estimation with Analytical Path-tracing

    OpenAIRE

    Kasper, Mike; Keivan, Nima; Sibley, Gabe; Heckman, Christoffer

    2017-01-01

    We present a novel algorithm for light source estimation in scenes reconstructed with a RGB-D camera based on an analytically-derived formulation of path-tracing. Our algorithm traces the reconstructed scene with a custom path-tracer and computes the analytical derivatives of the light transport equation from principles in optics. These derivatives are then used to perform gradient descent, minimizing the photometric error between one or more captured reference images and renders of our curre...

  11. Source-independent time-domain waveform inversion using convolved wavefields: Application to the encoded multisource waveform inversion

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2011-01-01

    Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate

  12. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the first source. The joint decoder uses an iterative scheme in which the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  13. Estimates of ion sources in deciduous and coniferous throughfall

    Science.gov (United States)

    Puckett, L.J.

    1990-01-01

    Estimates of external and internal sources of ions in net throughfall deposition were derived for a deciduous and a coniferous canopy by use of multiple regression. The external source component appears to be dominated by dry deposition of Ca2+, SO2 and NO3- during the dormant and growing seasons for the two canopy types. Increases in the leaching rates of K+ and Mg2+ during the growing season reflect the presence of leaves in the deciduous canopy and increased physiological activity in both canopies. Internal leaching rates for SO42- doubled during the growing season, presumably because of increased physiological activity and uptake of SO2 through stomates. Net deposition of SO42- in throughfall during the growing season appears highly dependent on stomatal uptake of SO2. Estimates of SO2 deposition velocities during the dormant season were 0.06 cm s-1 and 0.13 cm s-1 for the deciduous and coniferous canopies, respectively, and 0.30 cm s-1 and 0.43 cm s-1, respectively, during the growing season. For the ions of major interest with respect to ecosystem effects, namely H+, NO3- and SO42-, precipitation inputs generally outweighed estimates of dry deposition input. However, net throughfall deposition of NO3- and SO42- accounted for 20-47 and 34-50 per cent, respectively, of the total deposition of those ions. Error estimates of the ion sources were at least 50-100 per cent, and the method is subject to several assumptions and limitations.
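    A toy illustration of the regression idea (partitioning net throughfall into a dry-deposition term and a canopy-leaching term; the model form, covariates, and coefficients are hypothetical, not the study's actual regressors):

```python
import numpy as np

# Hypothetical model: net throughfall flux = dry deposition + canopy leaching,
#   NTF_i = v_d * (C_i * T_i) + L * P_i
# where C*T is an air-concentration-times-exposure covariate and P is rainfall.
rng = np.random.default_rng(4)
n = 60
ct = rng.uniform(5, 50, n)       # concentration x antecedent dry period
p = rng.uniform(2, 30, n)        # precipitation depth per event
v_d, leach = 0.3, 0.08           # "true" deposition velocity and leaching rate
ntf = v_d * ct + leach * p + rng.normal(0, 0.2, n)

X = np.column_stack([ct, p])
coef, *_ = np.linalg.lstsq(X, ntf, rcond=None)
print(coef)  # least-squares estimates of (v_d, leaching rate)
```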

  14. Fractional Calculus and Shannon Wavelet

    Directory of Open Access Journals (Sweden)

    Carlo Cattani

    2012-01-01

    Full Text Available An explicit analytical formula for the fractional derivative of any order of the Shannon wavelet is given as a wavelet series based on connection coefficients, so that for any L2(ℝ) function reconstructed by Shannon wavelets we can easily define its fractional derivative. The approximation error is explicitly computed, and the wavelet series is compared with the Grünwald fractional derivative, focusing on the many advantages of the wavelet method in terms of rate of convergence.
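    A hedged sketch of why band-limitation makes this tractable (the standard Fourier-multiplier view of the fractional derivative, not the paper's connection-coefficient derivation itself):

```latex
D^{\nu} f(t) \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty}
    (i\omega)^{\nu}\,\hat f(\omega)\,e^{i\omega t}\,\mathrm{d}\omega ,
\qquad
\operatorname{supp}\hat\psi \subset \{\,\pi \le |\omega| \le 2\pi\,\}.
```

    Because the Shannon wavelet is band-limited, each term of D^ν applied to a wavelet series reduces to an integral over a compact frequency band; those band integrals are what the connection coefficients encode.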

  15. Data structure for estimating emissions from non-road sources

    Energy Technology Data Exchange (ETDEWEB)

    Sorenson, S C; Kalivoda, M; Vacarro, R; Trozzi, C; Samaras, Z; Lewis, C A

    1997-03-01

    The work described in the following is a portion of the MEET project (Methodologies for Estimating Air Pollutant Emissions from Transport). The overall goal of the MEET project is to consolidate and present methodologies that can be used to estimate air pollutant emissions from various types of traffic sources. One of the goals of MEET is to provide methodologies to be used in the COMMUTE project, also funded by DG VII. COMMUTE is developing computer software that can be used to produce emission inventories on the European scale. Although COMMUTE is viewed as a prime user of the information generated in MEET, the MEET results are intended for broader use, on both smaller and larger spatial scales; the methodologies and data presented will be useful for planners working on a more local scale than a national or continental basis. While most attention in previous years was concentrated on emissions from road transport, it has become increasingly apparent that so-called off-road transportation contributes significantly to the emission of air pollutants. The three most common off-road traffic modes are air traffic, rail traffic, and ship (marine) traffic. In the following, the basic structure of the methods for estimating emissions from these sectors is given, together with the input and output data associated with these calculations. The structures are of necessity different for the different types of traffic, and the data structures reflect these variations and uncertainties. In some instances alternative approaches to emission estimation are suggested. The user must evaluate the amount and reliability of the available data for the application at hand, and select the method expected to give the highest accuracy. In any event, a large amount of uncertainty is inherent in the estimation of emissions from non-road traffic sources, particularly those involving rail and maritime transport. (EG)
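    A minimal sketch of the activity-times-emission-factor structure such inventory methods share (the record layout, modes, and factor values are invented for illustration, not MEET's actual factors):

```python
from dataclasses import dataclass

@dataclass
class SourceActivity:
    mode: str               # "air", "rail" or "ship"
    pollutant: str          # e.g. "NOx"
    activity: float         # fuel consumed [t], per the chosen method
    emission_factor: float  # kg of pollutant per t of fuel

def inventory_total(records, pollutant):
    """Total emissions of one pollutant, in tonnes, over all activity records."""
    return sum(r.activity * r.emission_factor
               for r in records if r.pollutant == pollutant) / 1000.0

records = [
    SourceActivity("rail", "NOx", 12_000, 55.0),
    SourceActivity("ship", "NOx", 30_000, 87.0),
    SourceActivity("air",  "NOx",  8_000, 14.0),
]
print(inventory_total(records, "NOx"))  # -> 3382.0 tonnes
```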

  16. Wavelet analysis in neurodynamics

    International Nuclear Information System (INIS)

    Pavlov, Aleksei N; Hramov, Aleksandr E; Koronovskii, Aleksei A; Sitnikova, Evgenija Yu; Makarov, Valeri A; Ovchinnikov, Alexey A

    2012-01-01

    Results obtained using continuous and discrete wavelet transforms as applied to problems in neurodynamics are reviewed, with the emphasis on the potential of wavelet analysis for decoding signal information from neural systems and networks. The following areas of application are considered: (1) the microscopic dynamics of single cells and intracellular processes, (2) sensory data processing, (3) the group dynamics of neuronal ensembles, and (4) the macrodynamics of rhythmical brain activity (using multichannel EEG recordings). The detection and classification of various oscillatory patterns of brain electrical activity and the development of continuous wavelet-based brain activity monitoring systems are also discussed as possibilities. (reviews of topical problems)

  17. Wavelets in physics

    CERN Document Server

    Fang, Li-Zhi

    1998-01-01

    Recent advances have shown wavelets to be an effective, and even necessary, mathematical tool for theoretical physics. This book is a timely overview of the progress of this new frontier. It includes an introduction to wavelet analysis and applications in the fields of high energy physics, astrophysics, cosmology and statistical physics. The topics are selected for the interests of physicists and graduate students of theoretical studies. It emphasizes the need for wavelets in describing and revealing structure in physical problems, which is not easily accomplished by other methods.

  18. Wavelets y sus aplicaciones

    OpenAIRE

    Castro, Liliana Raquel; Castro, Silvia Mabel

    1995-01-01

    An introduction to wavelet theory is presented, together with a historical review of how wavelets were introduced for the representation of functions. A comparison is made between the wavelet transform and the Fourier transform. Finally, some of the many applications of this new tool of harmonic analysis are presented.

  19. Chernobyl source term, atmospheric dispersion, and dose estimation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1988-02-01

    The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling, and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. These analyses indicated that essentially all of the noble gases, 80% of the radioiodines, 40% of the radiocesium, 10% of the tellurium, and about 1% or less of the more refractory elements were released. Atmospheric dispersion modeling of the radioactive cloud over the Northern Hemisphere revealed that the cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. The inhalation doses due to direct cloud exposure were estimated to exceed 10 mGy near the Chernobyl area, to range between 0.1 and 0.001 mGy within most of Europe, and to be generally less than 0.00001 mGy within the US. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents, while the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests. 9 refs., 3 figs., 6 tabs

  20. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals are investigated and improvements to each method are suggested in this work. First, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), performs better than GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to improve the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results show that the performance of both original methods is improved considerably.
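    A compact sketch of MDL-based source counting from covariance eigenvalues, with a small diagonal load on the sample covariance (the toy array, source powers, and loading level are invented for illustration; this is the textbook Wax-Kailath criterion, not the paper's full scheme):

```python
import numpy as np

def mdl_source_count(R, n_snapshots):
    """Minimum-description-length estimate of the number of sources from the
    eigenvalues of a (possibly diagonally loaded) covariance matrix R."""
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    p = lam.size
    mdl = []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean of noise eigenvalues
        arith = np.mean(tail)                    # arithmetic mean
        mdl.append(-n_snapshots * (p - k) * np.log(geo / arith)
                   + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Toy example: p = 8 channels, 2 sources, white noise; diagonal loading added.
rng = np.random.default_rng(5)
p, n, n_src = 8, 2000, 2
A = rng.normal(size=(p, n_src))
s = rng.normal(size=(n_src, n)) * np.array([[3.0], [2.0]])
x = A @ s + rng.normal(size=(p, n))
R = x @ x.T / n
R += 1e-3 * np.trace(R) / p * np.eye(p)          # diagonal loading
print(mdl_source_count(R, n))  # -> 2 for this well-separated toy case
```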

  1. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, in the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general

  2. Experimental study on source efficiencies for estimating surface contamination level

    International Nuclear Information System (INIS)

    Ichiji, Takeshi; Ogino, Haruyuki

    2008-01-01

    Source efficiency was measured experimentally for various materials, such as metals, nonmetals, flooring materials, sheet materials and other materials, contaminated by alpha- and beta-emitting radionuclides. Five nuclides, 147Pm, 60Co, 137Cs, 204Tl and 90Sr-90Y, were used as the beta emitters, and one nuclide, 241Am, was used as the alpha emitter. The test samples were prepared by placing drops of the radioactive standardized solutions uniformly on the various materials using an automatic quantitative dispenser system from Musashi Engineering, Inc. After placing the drops, the test materials were allowed to dry for more than 12 hours in a draft chamber with a hood. The radioactivity of each test material was about 30 Bq. Beta rays or alpha rays from the test materials were measured with a 2-pi gas flow proportional counter from Aloka Co., Ltd. The source efficiencies of the metals, nonmetals and sheet materials were higher than 0.5 in the case of contamination by the 137Cs, 204Tl and 90Sr-90Y standardized solutions, higher than 0.4 for the 60Co standardized solution, and higher than 0.25 for the alpha emitter 241Am. These values were higher than those given in Japanese Industrial Standards (JIS) documents. In contrast, the source efficiencies of some permeable materials were lower than those given in JIS documents, because source efficiency varies depending on whether the materials or radioactive sources are wet or dry. This study provides basic data on source efficiency, which is useful for estimating the surface contamination level of materials. (author)

  3. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    Science.gov (United States)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

    As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. To estimate source parameters reliably, we must often use an appropriate source spectrum model, and the omega-square model is the most frequently used. In this model the spectrum is flat at lower frequencies and falls off as the inverse square of the angular frequency. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) have reported high-frequency falloff exponents other than -2. Therefore, in this study we estimate the source parameters using a spectral model in which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate the P-wave and SH-wave spectra using empirical Green's functions (EGF) to remove the path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. In order to obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and then take a geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloffs as well as the corner frequencies of both events. Our results indicate that the high-frequency falloff exponent is often less than 2.0. We do not observe any regional, focal mechanism, or depth dependence of the falloff exponent. In addition, our estimated corner frequencies and falloff exponents are consistent between the P-wave and SH-wave analyses. In our presentation, we show differences in the source parameters estimated with a fixed omega-square model and with a model allowing variable high-frequency falloff.
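    A toy version of such a spectral-ratio grid search (noise-free synthetic ratio, generalized omega-model with free falloff exponent n; the frequencies, grids, and "true" parameters are invented, and the moment ratio is held fixed for simplicity):

```python
import numpy as np

def spec(f, m0, fc, n):
    """Generalized omega-model source spectrum (n = 2 is the omega-square model)."""
    return m0 / (1.0 + (f / fc) ** n)

f = np.logspace(-1, 1.3, 200)                  # 0.1-20 Hz
true = dict(m1=100.0, fc1=0.5, m2=1.0, fc2=4.0, n=1.7)
ratio_obs = (spec(f, true["m1"], true["fc1"], true["n"])
             / spec(f, true["m2"], true["fc2"], true["n"]))

best, best_err = None, np.inf
for n in np.arange(1.2, 2.6, 0.1):             # falloff exponent not fixed at 2
    for fc1 in np.arange(0.2, 1.0, 0.02):      # target-event corner frequency
        for fc2 in np.arange(2.0, 6.0, 0.1):   # EGF-event corner frequency
            r = spec(f, true["m1"], fc1, n) / spec(f, true["m2"], fc2, n)
            err = np.sum((np.log(r) - np.log(ratio_obs)) ** 2)
            if err < best_err:
                best, best_err = (round(n, 2), round(fc1, 2), round(fc2, 1)), err
print(best)  # should recover (1.7, 0.5, 4.0)
```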

  4. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    Science.gov (United States)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

    Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with MW 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process simultaneously determines source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, from which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between them is M0 = 0.76*10**23*(t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further check the validity of the derived relationship, we used it to infer the source duration of the 1999 Chi-Chi (Taiwan) earthquake with M0 = 2-5*10**27 dyne-cm (corresponding to MW = 7.5-7.7) to be approximately 29-40 sec, in agreement with many previous studies of source duration (28-42 sec).
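    The quoted scaling can be checked directly; a two-line sketch (units as in the abstract: M0 in dyne-cm, t in seconds):

```python
def duration_from_moment(m0_dyne_cm):
    """Source duration from seismic moment via M0 = 0.76e23 * t**3 (dyne-cm, s)."""
    return (m0_dyne_cm / 0.76e23) ** (1.0 / 3.0)

# Chi-Chi range quoted in the abstract: M0 = 2-5 x 10**27 dyne-cm.
print(round(duration_from_moment(2e27), 1),
      round(duration_from_moment(5e27), 1))  # -> 29.7 40.4, i.e. the ~29-40 s quoted
```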

  5. Wavelets a primer

    CERN Document Server

    Blatter, Christian

    1998-01-01

    The Wavelet Transform has stimulated research that is unparalleled since the invention of the Fast Fourier Transform and has opened new avenues of applications in signal processing, image compression, radiology, cardiology, and many other areas. This book grew out of a short course for mathematics students at the ETH in Zurich; it provides a solid mathematical foundation for the broad range of applications enjoyed by the wavelet transform. Numerous illustrations and fully worked out examples enhance the book.

  6. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    Science.gov (United States)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable
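    A minimal numpy illustration of the FRI principle mentioned above, via the classical annihilating-filter step (a noise-free 1-D toy with invented locations, not LOFAR data and not the LEAP implementation itself):

```python
import numpy as np

# Measurements: K low-frequency Fourier samples of M point sources at
# continuous-valued (off-grid) locations t_m in [0, 1):
#   X[k] = sum_m a_m * exp(-2j*pi*k*t_m)
t_true = np.array([0.123, 0.456, 0.789])
amp = np.array([1.0, 0.7, 1.3])
M = t_true.size
K = 2 * M + 1
k = np.arange(K)
X = (amp * np.exp(-2j * np.pi * np.outer(k, t_true))).sum(axis=1)

# Annihilating filter: a length-(M+1) filter h with h * X = 0; the roots of its
# polynomial are exp(-2j*pi*t_m), so the t_m are recovered without any grid.
T = np.array([[X[M + i - j] for j in range(M + 1)] for i in range(M)])
h = np.linalg.svd(T)[2][-1].conj()          # null vector of the Toeplitz system
roots = np.roots(h)
t_hat = np.sort(np.mod(np.angle(roots) / (-2 * np.pi), 1.0))
print(t_hat)  # -> approximately [0.123 0.456 0.789]
```

    With 2M+1 noise-free samples the M locations are recovered exactly, which is the sense in which FRI performance depends only on the number of measurements and the number of sources rather than on a pixel grid.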

  7. [Fatal occupational accidents: estimates based on more data sources].

    Science.gov (United States)

    Baldasseroni, A; Chellini, E; Zoppi, O; Giovannetti, L

    2001-01-01

    The data reported by INAIL (Istituto Nazionale Assicurazione Infortuni sul Lavoro) on fatal occupational injuries have always been considered complete and reliable. The authors verified the completeness of this information source by cross-checking it against databases from other registration systems for the period 1992-1996: the Regional Mortality Registry of Tuscany (RMR) and the registers and data of the Operative Units of Prevention, Hygiene and Safety in the Workplace (UOPISLL). In the five years concerned, a total of 458 cases could be considered fatal injuries at work, excluding traffic accidents, which were not included in the present study. The results show that the most complete information source was the RMR, reporting 80% of the cases, while INAIL reported only 62.2%; the UOPISLL source was the least reliable. Using the capture/recapture method, the estimated number of events in the period concerned (1992-1996) amounts to nearly 500 (499.8, CL 475.9-523.7), while the three sources systematically explored for the whole period (INAIL, RMR, UOPISLL) report 458 cases. An additional information source, the daily press, which could be systematically checked for only two months of each of the five years, reports 10 additional cases that were missed by the other three sources, indirectly confirming the reliability of the estimate. Most of the 157 fatal accidents reported by the RMR but not by INAIL occurred among farmers (70), most of them already retired, but several fatal accidents were also reported in the construction sector (30). Other categories appear only in the RMR data because, in the period concerned, they were not covered by INAIL insurance (18 cases in the Army and Police, 7 on the railways). The survey confirms the essential importance of INAIL data for the surveillance system applied to this phenomenon.

  8. An application of time-frequency signal analysis technique to estimate the location of an impact source on a plate type structure

    International Nuclear Information System (INIS)

    Park, Jin Ho; Lee, Jeong Han; Choi, Young Chul; Kim, Chan Joong; Seong, Poong Hyun

    2005-01-01

    We review whether time-frequency signal analysis techniques are suitable for estimating the location of an impact source in a plate structure. The STFT (short-time Fourier transform), WVD (Wigner-Ville distribution) and CWT (continuous wavelet transform) methods are introduced, and their advantages and disadvantages are described using a simulated signal component. The essence of these techniques is to separate the traveling waves in both the time and frequency domains using the dispersion characteristics of structural waves. These time-frequency methods are expected to be more useful than conventional time-domain analyses for the impact localization problem on a plate-type structure. It is also concluded that the smoothed WVD gives a more reliable location estimate than the other methods in a noisy environment

  9. Pseudo-stochastic signal characterization in wavelet-domain

    International Nuclear Information System (INIS)

    Zaytsev, Kirill I; Zhirnov, Andrei A; Alekhnovich, Valentin I; Yurchenko, Stanislav O

    2015-01-01

    In this paper we present a method for fast and accurate characterization of pseudo-stochastic signals, which contain a large number of similar but randomly located fragments. The method estimates the statistical characteristics of a pseudo-stochastic signal and is based on digital signal processing in the wavelet domain, using the continuous wavelet transform and a criterion on the wavelet-scale power density. We implement the method experimentally for sand granulometry and estimate the statistical parameters of test sand fractions

  10. Improvement of electrocardiogram by empirical wavelet transform

    Science.gov (United States)

    Chanchang, Vikanda; Kumchaiseemak, Nakorn; Sutthiopad, Malee; Luengviriya, Chaiya

    2017-09-01

    Electrocardiogram (ECG) is a crucial tool in the detection of cardiac arrhythmia. It is also often used in routine physical exams, especially for elderly people. This graphical representation of the electrical activity of the heart is obtained by measuring voltage at the skin; therefore, the signal is always contaminated by noise from various sources. For proper interpretation, the quality of the ECG should be improved by noise reduction. In this article, we present a study of noise filtration in the ECG using an empirical wavelet transform (EWT). Unlike the traditional wavelet method, the EWT is adaptive, since the frequency spectrum of the ECG is taken into account in the construction of the wavelet basis. We show that the signal-to-noise ratio increases after noise filtration for different noise artefacts.

  11. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new wavelet-domain approach to image denoising. Recent research has used the wavelet transform, a time-frequency transform, to compute wavelet coefficients and eliminate noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation subimage to estimate a better denoised image, since this naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of the method. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain. On 100 images of the LIVE dataset, comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.
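
The core idea above, reconstructing from the naturally less noisy approximation subband, can be sketched with a one-level Haar transform on a 1-D signal. This is a minimal illustrative stand-in, not the paper's POAC wavelet-packet scheme; the test signal and noise level are arbitrary:

```python
import math, random

def haar_decompose(x):
    """One level of the Haar wavelet transform: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one Haar level (perfect reconstruction)."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / math.sqrt(2))
        x.append((a - d) / math.sqrt(2))
    return x

random.seed(0)
clean = [math.sin(2 * math.pi * k / 64) for k in range(256)]
noisy = [c + random.gauss(0, 0.3) for c in clean]

# "Approximation-only" denoising: zero the detail subband, keep the
# approximation subband, and reconstruct.
approx, detail = haar_decompose(noisy)
denoised = haar_reconstruct(approx, [0.0] * len(detail))

mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
assert mse(denoised, clean) < mse(noisy, clean)  # noise energy reduced
```

Zeroing the details halves the noise variance at the cost of slight smoothing of the signal, which is why the approximation subband is a good basis for the denoised estimate when the signal is dominated by low frequencies.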

  12. Time-frequency analysis of phonocardiogram signals using wavelet transform: a comparative study.

    Science.gov (United States)

    Ergen, Burhan; Tatar, Yetkin; Gulcur, Halil Ozcan

    2012-01-01

    Analysis of phonocardiogram (PCG) signals provides a non-invasive means to determine abnormalities caused by cardiovascular system pathology. In general, time-frequency representation (TFR) methods are used to study the PCG signal because it is a non-stationary bio-signal. The continuous wavelet transform (CWT) is especially suitable for the analysis of non-stationary signals and for obtaining the TFR, due to its high resolution in both time and frequency, and has recently become a favourite tool. It decomposes a signal in terms of elementary contributions called wavelets, which are shifted and dilated copies of a fixed mother wavelet function, and yields a joint TFR. Although the basic characteristics of wavelets are similar, each type of wavelet produces a different TFR. In this study, eight of the best-known real wavelets are examined on typical PCG signals indicating heart abnormalities in order to determine the best wavelet for obtaining a reliable TFR. For this purpose, the wavelet energy and frequency spectrum estimations based on the CWT and the spectra of the chosen wavelets were compared with the energy distribution and the autoregressive frequency spectra. The results show that the Morlet wavelet is the most reliable wavelet for the time-frequency analysis of PCG signals.
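
Why a Morlet-type CWT localizes non-stationary content in both time and frequency can be shown with a toy sketch: one row of a CWT (one scale) computed by direct correlation. The two-tone test signal, the sampling rate, and the scale-to-frequency mapping scale ≈ w0·fs/(2πf) are illustrative assumptions, not the paper's PCG data:

```python
import math

def morlet(t, scale, w0=6.0):
    """Real part of a (simplified, non-admissibility-corrected) Morlet wavelet."""
    u = t / scale
    return math.exp(-0.5 * u * u) * math.cos(w0 * u) / math.sqrt(scale)

def cwt_row(signal, scale, w0=6.0):
    """One row of a continuous wavelet transform (one scale), by direct
    correlation of the signal with the scaled wavelet."""
    half = int(4 * scale)
    kernel = [morlet(k, scale, w0) for k in range(-half, half + 1)]
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(kernel, start=-half):
            if 0 <= n + k < len(signal):
                acc += signal[n + k] * h
        out.append(acc)
    return out

fs = 200.0
# Non-stationary test signal: 10 Hz in the first half, 40 Hz in the second.
sig = [math.sin(2 * math.pi * (10 if n < 400 else 40) * n / fs) for n in range(800)]

# Scale matched to 10 Hz via scale = w0 * fs / (2 * pi * f).
scale_10 = 6.0 * fs / (2 * math.pi * 10)
row = cwt_row(sig, scale_10)
power_first = sum(v * v for v in row[100:300])
power_second = sum(v * v for v in row[500:700])
assert power_first > power_second  # 10 Hz energy is localized in time
```

The 10 Hz scale responds strongly only while the 10 Hz tone is present, which is the joint time-frequency localization the abstract exploits for PCG analysis.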

  13. Lecture notes on wavelet transforms

    CERN Document Server

    Debnath, Lokenath

    2017-01-01

    This book provides a systematic exposition of the basic ideas and results of wavelet analysis suitable for mathematicians, scientists, and engineers alike. The primary goal of this text is to show how different types of wavelets can be constructed, illustrate why they are such powerful tools in mathematical analysis, and demonstrate their use in applications. It also develops the required analytical knowledge and skills on the part of the reader, rather than focus on the importance of more abstract formulation with full mathematical rigor.  These notes differ from many textbooks with similar titles in that a major emphasis is placed on the thorough development of the underlying theory before introducing applications and modern topics such as fractional Fourier transforms, windowed canonical transforms, fractional wavelet transforms, fast wavelet transforms, spline wavelets, Daubechies wavelets, harmonic wavelets and non-uniform wavelets. The selection, arrangement, and presentation of the material in these ...

  14. Target recognition by wavelet transform

    International Nuclear Information System (INIS)

    Li Zhengdong; He Wuliang; Zheng Xiaodong; Cheng Jiayuan; Peng Wen; Pei Chunlan; Song Chen

    2002-01-01

    Wavelet transform has an important character of multi-resolution power, which presents a pyramid structure, and this character coincides with the way people distinguish objects, from coarse to fine and from large to tiny. In addition, the wavelet transform helps reduce image noise, simplify calculation, and embody the characteristic points of the target image. A method of target recognition by wavelet transform is provided

  15. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT. The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP approach by considering a trivariate statistical model for the local neighboring of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on the Principal Component Analysis (PCA is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over the state of the art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.

  16. Wavelets and quantum algebras

    International Nuclear Information System (INIS)

    Ludu, A.; Greiner, M.

    1995-09-01

    A non-linear associative algebra is realized in terms of translation and dilation operators, and a wavelet structure generating algebra is obtained. We show that this algebra is a q-deformation of the Fourier series generating algebra, and reduces to it for a certain value of the deformation parameter. This algebra is also homeomorphic with the q-deformed su_q(2) algebra and some of its extensions. Through this algebraic approach new methods for obtaining the wavelets are introduced. (author). 20 refs

  17. Electromagnetic spatial coherence wavelets

    International Nuclear Information System (INIS)

    Castaneda, R.; Garcia-Sucerquia, J.

    2005-10-01

    The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. To this aim, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the formerly known quantities for this domain can be expressed. It allows analyzing the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. (author)

  18. Dietary sources and estimated intake of CLA

    Directory of Open Access Journals (Sweden)

    Combe Nicole

    2005-01-01

    Full Text Available The term "conjugated linoleic acid" (CLA) describes a group of geometrical and positional isomers of linoleic acid (18:2 9cis 12cis) with double bonds in conjugated position. These isomers are 18:2 8trans 10cis, 18:2 9cis 11trans, 18:2 10trans 12cis and 18:2 11cis 13trans. In the human diet, the fats from ruminants (milk, meat…) are the natural source of these fatty acids. CLAs are produced by the rumen anaerobic bacteria metabolism of linoleic acid, with 18:2 9cis 11trans being the predominant isomer (up to 90% of total CLAs), named for that reason "rumenic acid". The CLA-richest food is milk (2-40 mg/g of fat, depending on the animal feed), as well as butter and dairy products, followed by meat of ruminants. Vegetable oils and margarine contain only small amounts of CLAs (0-0.5 mg/g), originating from technological processes. Significant quantities of CLAs are found in human breast milk, depending on the women's dietary habits (from 1.9 to 11.2 mg/g). Human consumption levels of CLAs have been estimated in different countries. With food questionnaires of the "3-7 days recall" or "semi-quantitative frequency" types, population consumption has been estimated at between 20 and 500 mg per day, with higher levels in men than in women. In Australia, the dietary intake may reach 1.5 g/day in some cases.

  19. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    Science.gov (United States)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  20. EEG Artifact Removal Using a Wavelet Neural Network

    Science.gov (United States)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency property of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
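
The wavelet-thresholding baseline mentioned above can be sketched as follows. For simplicity this uses the Donoho-Johnstone universal threshold on a synthetic sparse coefficient vector, not the SURE-based adaptive threshold of the paper; the signal, noise level, and sparsity pattern are illustrative assumptions:

```python
import math, random

def soft(w, t):
    """Soft thresholding: shrink a coefficient toward zero by t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def hard(w, t):
    """Hard thresholding: zero coefficients below t, keep the rest."""
    return w if abs(w) > t else 0.0

random.seed(1)
n = 512
sigma = 0.5
# Sparse "clean" coefficient vector (a few large coefficients) plus noise.
clean = [3.0 if k % 64 == 0 else 0.0 for k in range(n)]
noisy = [c + random.gauss(0, sigma) for c in clean]

t = sigma * math.sqrt(2 * math.log(n))  # universal threshold
den_soft = [soft(w, t) for w in noisy]

mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
assert mse(den_soft, clean) < mse(noisy, clean)
```

Because wavelet coefficients of structured signals are sparse while noise spreads evenly over all coefficients, thresholding suppresses most of the noise while retaining the few large signal coefficients.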

  1. WAVELET ANALYSIS OF ABNORMAL ECGS

    Directory of Open Access Journals (Sweden)

    Vasudha Nannaparaju

    2014-02-01

    Full Text Available Detection of warning signals given by the heart can be diagnosed from the ECG. An accurate and reliable diagnosis from the ECG is very important; however, it is cumbersome and at times ambiguous in the time domain due to the presence of noise. Study of the ECG in the wavelet domain, using both the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) with well-known wavelets as well as a wavelet proposed by the authors for this investigation, is found to be useful and yields fairly reliable results. In this study, wavelet analysis of ECGs of normal, hypertensive, diabetic and cardiac subjects is carried out. The salient feature of the study is that detection of the P and T phases, which are otherwise feeble or absent in raw ECGs, is feasible in the wavelet domain.

  2. Boosted bosons and wavelets

    CERN Document Server

    Søgaard, Andreas

    For the LHC Run 2 and beyond, experiments are pushing both the energy and the intensity frontier so the need for robust and efficient pile-up mitigation tools becomes ever more pressing. Several methods exist, relying on uniformity of pile-up, local correlations of charged to neutral particles, and parton shower shapes, all in $y − \phi$ space. Wavelets are presented as tools for pile-up removal, utilising their ability to encode position and frequency information simultaneously. This allows for the separation of individual hadron collision events by angular scale and thus for subtracting soft, diffuse/wide-angle contributions while retaining the hard, small-angle components from the hard event. Wavelet methods may utilise the same assumptions as existing methods, the difference being the underlying, novel representation. Several wavelet methods are proposed and their effect studied in simple toy simulation under conditions relevant for the LHC Run 2. One full pile-up mitigation tool (‘wavelet analysis...

  3. Wavelet based methods for improved wind profiler signal processing

    Directory of Open Access Journals (Sweden)

    V. Lehmann

    2001-08-01

    Full Text Available In this paper, we apply wavelet thresholding for removing automatically ground and intermittent clutter (airplane echoes) from wind profiler radar data. Using the concept of discrete multi-resolution analysis and non-parametric estimation theory, we develop wavelet domain thresholding rules, which allow us to identify the coefficients relevant for clutter and to suppress them in order to obtain filtered reconstructions. Key words: Meteorology and atmospheric dynamics (instruments and techniques) – Radio science (remote sensing; signal processing)

  4. Characterization and Simulation of Gunfire with Wavelets

    Directory of Open Access Journals (Sweden)

    David O. Smallwood

    1999-01-01

    Full Text Available Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
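
The simulation step described above, drawing Gaussian random wavelet coefficients with the mean and standard deviation estimated from a measured single-round record, can be sketched with a full Haar analysis. The paper does not specify the Haar wavelet, and the decaying-oscillation "measured" record below is a synthetic stand-in:

```python
import math, random

def haar(x):
    """Full Haar analysis: returns [detail_1, detail_2, ..., final_approx]."""
    subbands = []
    while len(x) > 1:
        approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
        detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
        subbands.append(detail)
        x = approx
    subbands.append(x)
    return subbands

def ihaar(subbands):
    """Invert the full Haar analysis."""
    x = subbands[-1]
    for detail in reversed(subbands[:-1]):
        y = []
        for a, d in zip(x, detail):
            y.append((a + d) / math.sqrt(2))
            y.append((a - d) / math.sqrt(2))
        x = y
    return x

random.seed(2)
# Stand-in for one measured single-round transient (decaying oscillation).
measured = [math.exp(-n / 64) * math.sin(2 * math.pi * n / 16) for n in range(256)]

# Surrogate record: per-subband Gaussian coefficients with matched mean/std.
surrogate_bands = []
for band in haar(measured):
    m = sum(band) / len(band)
    s = math.sqrt(sum((c - m) ** 2 for c in band) / len(band)) if len(band) > 1 else 0.0
    surrogate_bands.append([random.gauss(m, s) for _ in band])
surrogate = ihaar(surrogate_bands)
assert len(surrogate) == len(measured)
```

Many such surrogate single-round records, concatenated, would imitate the multi-round firing sequence; the second-order probability-density correction described in the abstract is not reproduced here.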

  5. Pedestrian detection based on redundant wavelet transform

    Science.gov (United States)

    Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun

    2016-10-01

    Intelligent video surveillance analyzes video or image sequences captured by a fixed or mobile surveillance camera, including moving object detection, segmentation and recognition, so that we can be notified immediately of an abnormal situation. Pedestrian detection plays an important role in an intelligent video surveillance system and is also a key technology in the field of intelligent vehicles, so it is of vital significance for traffic management optimization, early security warning and abnormal behavior detection. Generally, pedestrian detection can be summarized as: first, estimate moving areas; then, extract features of a region of interest; finally, classify using a classifier. The redundant wavelet transform (RWT) overcomes the shift variance of the discrete wavelet transform and performs better in motion estimation. Addressing the problem of detecting multiple pedestrians moving at different speeds, we present a pedestrian detection algorithm based on motion estimation using the RWT, combining histograms of oriented gradients (HOG) and a support vector machine (SVM). Firstly, three intensities of movement (IoM) are estimated using the RWT and the corresponding areas are segmented; according to the different IoM, a region proposal (RP) is generated. Then, the features of an RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases and the final detection results are obtained. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.

  6. Fault diagnosis for temperature, flow rate and pressure sensors in VAV systems using wavelet neural network

    Energy Technology Data Exchange (ETDEWEB)

    Du, Zhimin; Jin, Xinqiao; Yang, Yunyu [School of Mechanical Engineering, Shanghai Jiao Tong University, 800, Dongchuan Road, Shanghai (China)

    2009-09-15

    Wavelet neural network, the integration of wavelet analysis and neural network, is presented to diagnose faults of temperature, flow rate and pressure sensors in variable air volume (VAV) systems, to maintain good energy-conservation capacity. Wavelet analysis is first used to process the original data collected from the building automation system. With three-level wavelet decomposition, series of characteristic information representing various operation conditions of the system are obtained. In addition, a neural network is developed to diagnose the source of the fault. To improve diagnosis efficiency, three data groups based on several physical models or balances are classified and constructed. Using the data decomposed by the three-level wavelet, the neural network can be well trained and a series of convergent networks is obtained. Finally, new measurements to be diagnosed are similarly processed by wavelet, and the well-trained convergent neural networks are used to identify the operation condition and isolate the source of the fault. (author)

  7. Estimation of source parameters of Chamoli Earthquake, India

    Indian Academy of Sciences (India)

    R. Narasimhan, Krishtel eMaging Solutions

    meter studies, in different parts of the world. Singh et al (1979) and Sharma and Wason (1994, 1995) have calculated source parameters for the Himalayan and nearby regions. To the best of the authors' knowledge, source parameter studies using strong motion data have not been carried out in India so far, though similar ...

  8. Processing of pulse oximeter data using discrete wavelet analysis.

    Science.gov (United States)

    Lee, Seungjoon; Ibey, Bennett L; Xu, Weijian; Wilson, Mark A; Ericson, M Nance; Coté, Gerard L

    2005-07-01

    A wavelet-based signal processing technique was employed to improve an implantable blood perfusion monitoring system. Data were acquired from both in vitro and in vivo sources: a perfusion model and the proximal jejunum of an adult pig. Results showed that wavelet analysis could isolate perfusion signals from raw, periodic, in vitro data as well as fast Fourier transform (FFT) methods could. However, for the quasi-periodic in vivo data segments, wavelet analysis provided more consistent results than FFT analysis for data segments of 50, 10, and 5 s in length. Wavelet analysis has thus been shown to require fewer data points for quasi-periodic data than FFT analysis, making it a good choice for an indwelling perfusion monitor where power consumption and reaction time are paramount.

  9. Wavelets in medical imaging

    International Nuclear Information System (INIS)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-01-01

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computer tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of one of the important biomedical signals, the EEG, is carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  10. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computer tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of one of the important biomedical signals, the EEG, is carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  11. Applications of wavelet transforms for nuclear power plant signal analysis

    International Nuclear Information System (INIS)

    Seker, S.; Turkcan, E.; Upadhyaya, B.R.; Erbay, A.S.

    1998-01-01

    The safety of nuclear power plants (NPPs) may be enhanced by the timely processing of information derived from multiple process signals. The most widely used technique in signal analysis applications is the Fourier transform in the frequency domain, used to generate power spectral densities (PSD). However, the Fourier transform is global in nature and will obscure any non-stationary signal feature. Lately, a powerful technique called the wavelet transform has been developed. This transform uses certain basis functions for representing the data in an effective manner, with capability for sub-band analysis and time-frequency localization as needed. This paper presents a brief overview of wavelets applied in the nuclear industry for signal processing and plant monitoring, and also summarizes the basic theory of wavelets. To illustrate the application of wavelet transforms, data were acquired from the operating nuclear power plant Borssele in the Netherlands. The experimental data consist of various plant signals selected from stationary power operation. Their frequency characteristics and mutual relations were investigated using the MATLAB signal processing and wavelet toolboxes, computing their PSDs and coherence functions by multi-resolution analysis. The results indicate that the sub-band PSD matches the original signal PSD and enhances the estimation of coherence functions. The wavelet analysis demonstrates the feasibility of application to stationary signals to provide better estimates in the frequency band of interest as compared to the classical FFT approach. (author)

  12. Estimates of radiation doses from various sources of exposure

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This chapter provides an overview of radiation doses to individuals and to the collective US population from various sources of ionizing radiation. Summary tables present doses from occupational exposures and annual per capita doses from natural background, the healing arts, nuclear weapons, nuclear energy and consumer products. Although doses from non-ionizing radiation are not yet readily available in a concise form, the major sources of non-ionizing radiation are listed

  13. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    OpenAIRE

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). Since point source pollution has been brought under control, non-point source pollution is becoming the main pollution source, and the prediction of NSP load is increasingly important in water pollution control and planning at the watershed scale. Considering the shortage of NSP monitoring data in China, a practical method for estimating the non-point source pollution load --- rainfall deduction met...

  14. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2017-05-01

    Full Text Available This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak point is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters from the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested on simulated and measured spectra from an energy X-ray spectrometer and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from background information, and can also effectively distinguish overlapping peaks in an EDXRF spectrum.
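
One property such a method relies on, that a zero-mean wavelet like the Mexican hat makes peak-point coefficients insensitive to a flat background, can be checked numerically. The Gaussian peak, background level, and scale below are arbitrary illustrative values, and this sketch does not reproduce the paper's parameter-recovery formulas:

```python
import math

def mexican_hat(t, scale):
    """Mexican hat (second derivative of a Gaussian); it has zero mean."""
    u = t / scale
    return (1 - u * u) * math.exp(-0.5 * u * u)

def cwt_at(signal, center, scale):
    """CWT coefficient at one translation/scale by direct correlation."""
    half = int(5 * scale)
    acc = 0.0
    for k in range(-half, half + 1):
        if 0 <= center + k < len(signal):
            acc += signal[center + k] * mexican_hat(k, scale)
    return acc / math.sqrt(scale)

n = 512
peak = [10.0 * math.exp(-0.5 * ((x - 256) / 8.0) ** 2) for x in range(n)]
background = [5.0] * n  # constant spectral background
spectrum = [p + b for p, b in zip(peak, background)]

# Because the Mexican hat integrates to zero, the coefficient at the peak
# is nearly unchanged by the constant background:
c_with = cwt_at(spectrum, 256, 8.0)
c_without = cwt_at(peak, 256, 8.0)
assert abs(c_with - c_without) < 1e-3 * abs(c_without)
```

This background immunity is why the detail (wavelet) coefficients at the peak point can estimate the peak intensity without a separate background subtraction step.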

  15. Time-Frequency-Wavenumber Analysis of Surface Waves Using the Continuous Wavelet Transform

    Science.gov (United States)

    Poggi, V.; Fäh, D.; Giardini, D.

    2013-03-01

    A modified approach to surface wave dispersion analysis using active sources is proposed. The method is based on continuous recordings, and uses the continuous wavelet transform to analyze the phase velocity dispersion of surface waves. This makes it possible to accurately localize the phase information in time, and to isolate the most significant contribution of the surface waves. To extract the dispersion information, a hybrid technique is then applied to the narrowband filtered seismic recordings. The technique combines the flexibility of the slant stack method in identifying waves that propagate in space and time with the resolution of f-k approaches. This is particularly beneficial for higher-mode identification in cases of high noise levels. To compute the continuous wavelet transform, a new mother wavelet is presented and compared to the classical and widely used Morlet type. The proposed wavelet is obtained from a raised-cosine envelope function (Hanning type). The proposed approach is particularly suitable when using continuous recordings (e.g., from seismological-like equipment) since it does not require any hardware-based source triggering; triggering can be performed subsequently with the proposed method. Estimation of the surface wave phase delay is performed in the frequency domain by means of a covariance matrix averaging procedure over successive wave field excitations. Thus, no record stacking is necessary in the time domain and a large number of consecutive shots can be used. This leads to a certain simplification of the field procedures. To demonstrate the effectiveness of the method, we tested it on synthetics as well as on real field data. For the real case we also combine dispersion curves from ambient vibrations and active measurements.
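    The underlying idea — a narrowband wavelet coefficient carries the phase of the wave at the analysis frequency, so the phase difference between two receivers gives the propagation delay — can be sketched on synthetic data. This is a minimal illustration with a Morlet-type wavelet and hypothetical signal parameters, not the record's covariance-averaging procedure or its raised-cosine wavelet.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f0 = 25.0            # analysis frequency in Hz
tau = 0.0123         # true inter-receiver propagation delay in seconds (hypothetical)

# Band-limited transient recorded at two receivers; the second is a delayed copy.
def pulse(tc):
    return np.exp(-(t - tc)**2 / (2.0 * 0.05**2)) * np.cos(2.0 * np.pi * f0 * (t - tc))

rec1, rec2 = pulse(0.8), pulse(0.8 + tau)

# Complex Morlet-type wavelet tuned to f0 (8-cycle Gaussian envelope).
wt = np.arange(-0.5, 0.5, 1.0 / fs)
s = 8.0 / (2.0 * np.pi * f0)
w = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2.0 * s**2))

c1 = np.convolve(rec1, w, mode='same')
c2 = np.convolve(rec2, w, mode='same')

# Phase difference of the two coefficient series at the envelope peak -> delay.
k = np.argmax(np.abs(c1))
delay_est = np.angle(c1[k] * np.conj(c2[k])) / (2.0 * np.pi * f0)
```

The phase is unambiguous only while f0*tau stays below half a cycle; the record's frequency-domain averaging over many shots addresses the noise, not this wrapping limit.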

  16. Estimating regional centile curves from mixed data sources and countries

    NARCIS (Netherlands)

    Buuren, S. van; Hayes, D.J.; Stasinopoulos, D.M.; Rigby, R.A.; Kuile, F.O. ter; Terlouw, D.J.

    2009-01-01

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating

  17. Identifying sources and estimating glandular output of salivary TIMP-1

    DEFF Research Database (Denmark)

    Holten-Andersen, Lars; Jensen, Siri Beier; Jensen, Allan Bardow

    2008-01-01

    saliva (267.01 ng/min). Conclusion. This study shows that saliva contains authentic TIMP-1, the concentration of which was found to depend on gland type and salivary flow. Stimulated whole saliva is suggested as a reliable and easily accessible source for TIMP-1 determinations in bodily fluids....

  18. Active Power Deficit Estimation in Presence of Renewable Energy Sources

    DEFF Research Database (Denmark)

    Hoseinzadeh, Bakhtyar; Silva, Filipe Miguel Faria da; Bak, Claus Leth

    2015-01-01

    The inertia of the power system is reduced in the presence of Renewable Energy Sources (RESs) due to their low or even no contribution in the inertial response as it is inherently available in the Synchronous Machines (SMs). The total inertia of the grid becomes unknown or at least uncertain...

  19. Joint multifractal analysis based on wavelet leaders

    Science.gov (United States)

    Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing

    2017-12-01

    Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.
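    The binomial measure used as a test case above has mass exponents tau(q) known in closed form, which is what makes it a good benchmark. The scaling law can be checked with the plain box-counting partition function — a much simpler estimator than the wavelet-leader machinery in the record, shown here only to make the benchmark concrete:

```python
import numpy as np

def binomial_measure(p, levels):
    """Deterministic binomial cascade: split each box's mass into (p, 1-p)."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * p, mu * (1.0 - p)])
    return mu

p, J = 0.3, 14
mu = binomial_measure(p, J)

def tau_empirical(q, mu, js):
    """Slope of log2 partition function versus log2 box size."""
    logZ = [np.log2(np.sum(mu.reshape(-1, 2**j).sum(axis=1)**q)) for j in js]
    logs = [j - J for j in js]          # log2 of the box size (boxes of 2**j samples)
    return np.polyfit(logs, logZ, 1)[0]

q = 2.0
tau_theory = -np.log2(p**q + (1.0 - p)**q)   # analytic mass exponent of the cascade
tau_est = tau_empirical(q, mu, range(2, 8))
```

For the deterministic cascade the fit is exact; on noisy real series this is where leader- or MFDFA-type estimators earn their keep.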

  20. Noise reduction by wavelet thresholding

    National Research Council Canada - National Science Library

    Jansen, Maarten

    2001-01-01

    .... I rather present new material and my own insights into the questions involved with wavelet-based noise reduction. On the other hand, the presented material does cover a whole range of methodologies, and in that sense the book may serve as an introduction into the domain of wavelet smoothing. Throughout the text, three main properties show up ever again: spar...

  1. The Hyperloop as a Source of Interesting Estimation Questions

    Science.gov (United States)

    Allain, Rhett

    2014-03-01

    The Hyperloop is a conceptual high-speed transportation system proposed by Elon Musk. The basic idea uses passenger capsules inside a reduced-pressure tube. Even though the actual physics of dynamic air flow in a confined space can be complicated, there is a multitude of estimation problems that can be addressed. These back-of-the-envelope questions can be tackled by physicists of all levels as well as the general public, and serve as a great example of the fundamental aspects of physics.
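    One such back-of-the-envelope question, worked with assumed round numbers (the mass and speed below are illustrative, not Hyperloop specifications):

```python
# How much kinetic energy does a cruising capsule carry?
m = 15000.0      # kg  (assumed capsule mass)
v = 300.0        # m/s (assumed cruise speed)
ke = 0.5 * m * v**2
print(ke)        # 675000000.0 J, i.e. roughly 0.7 GJ
```

The point of such estimates is the order of magnitude, not the digits: here, comparable to the energy of a few hundred kilograms of TNT, which already frames questions about braking and crash safety.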

  2. A generalized wavelet extrema representation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  3. Wavelet frames and their duals

    DEFF Research Database (Denmark)

    Lemvig, Jakob

    2008-01-01

    frames with good time localization and other attractive properties. Furthermore, the dual wavelet frames are constructed in such a way that we are guaranteed that both frames will have the same desirable features. The construction procedure works for any real, expansive dilation. A quasi-affine system....... The signals are then represented by linear combinations of the building blocks with coefficients found by an associated frame, called a dual frame. A wavelet frame is a frame where the building blocks are stretched (dilated) and translated versions of a single function; such a frame is said to have wavelet...... structure. The dilation of the wavelet building blocks in higher dimension is done via a square matrix which is usually taken to be integer valued. In this thesis we step away from the "usual" integer, expansive dilation and consider more general, expansive dilations. In most applications of wavelet frames...

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  5. Wavelet Enhanced Appearance Modelling

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren; Cootes, Timothy F.

    2004-01-01

    Generative segmentation methods such as the Active Appearance Models (AAM) establish dense correspondences by modelling variation of shape and pixel intensities. Alas, for 3D and high-resolution 2D images typical in medical imaging, this approach is rendered infeasible due to excessive storage......-7 wavelets on face images have shown that segmentation accuracy degrades gracefully with increasing compression ratio. Further, a proposed weighting scheme emphasizing edges was shown to be significantly more accurate at compression ratio 1:1, than a conventional AAM. At higher compression ratios the scheme...

  6. Adaptive algorithms for a self-shielding wavelet-based Galerkin method

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.

    2009-01-01

    The treatment of the energy variable in deterministic neutron transport methods is based on a multigroup discretization, considering the flux and cross-sections to be constant within a group. In this case, a self-shielding calculation is mandatory to correct the cross-sections of resonant isotopes. In this paper, a different approach based on a finite element discretization on a wavelet basis is used. We propose adaptive algorithms constructed from error estimates. Such an approach is applied to within-group scattering source iterations. A first implementation is presented in the special case of the fine structure equation for an infinite homogeneous medium. Extension to spatially-dependent cases is discussed. (authors)

  7. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    Science.gov (United States)

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  8. A wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry

    Science.gov (United States)

    Wang, Jianhua; Yang, Yanxi

    2018-05-01

    We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum value point is extracted from the modulus of the two-dimensional wavelet transform coefficients, and the local extreme value points exceeding 90% of the maximum value are also obtained; together they constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and the logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. Finally, the dynamic programming method is used to accurately find the optimal wavelet ridge, and the wrapped phase can be obtained by extracting the phase along the ridge. The advantage is that fringe patterns with a low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the 3-D surface recovery accuracy of the proposed algorithm is increased. In addition, the demodulation phase accuracy of the Morlet, Fan and Cauchy mother wavelets is compared.
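    Dynamic-programming ridge extraction itself is generic and can be sketched in one dimension: reward the coefficient modulus along the path, penalize jumps between adjacent time columns, and backtrack the optimal path. The sketch below uses a plain magnitude-plus-jump-penalty cost on a synthetic chirp — not Abid's cost function, the rotation-factor gradient, or the logarithmic Logistic weighting from the record.

```python
import numpy as np

fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.cos(2.0 * np.pi * (20.0 * t + 10.0 * t**2))   # chirp, instantaneous freq 20 -> 60 Hz

# Morlet CWT magnitude on a grid of analysis frequencies (6-cycle envelope).
freqs = np.arange(10.0, 80.0, 1.0)
wt = np.arange(-0.25, 0.25, 1.0 / fs)
mag = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    s = 6.0 / (2.0 * np.pi * f)
    w = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2.0 * s**2))
    mag[i] = np.abs(np.convolve(x, w, mode='same'))

# Dynamic programming: maximise accumulated |W| with a penalty on frequency jumps.
lam = 0.5 * mag.max()
nF, nT = mag.shape
cost = np.empty((nF, nT))
back = np.zeros((nF, nT), dtype=int)
cost[:, 0] = -mag[:, 0]
rows = np.arange(nF)
for n in range(1, nT):
    for i in range(nF):
        cand = cost[:, n - 1] + lam * np.abs(rows - i)   # stay smooth across columns
        back[i, n] = np.argmin(cand)
        cost[i, n] = -mag[i, n] + cand[back[i, n]]

ridge = np.empty(nT, dtype=int)          # backtrack the optimal path
ridge[-1] = np.argmin(cost[:, -1])
for n in range(nT - 1, 0, -1):
    ridge[n - 1] = back[ridge[n], n]
```

At t = 1 s the chirp's instantaneous frequency is 40 Hz, and the extracted ridge tracks it; the penalty weight lam trades noise immunity against the ability to follow fast frequency changes, which is exactly what the record's refined cost function tunes.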

  9. Source term estimation via monitoring data and its implementation to the RODOS system

    International Nuclear Information System (INIS)

    Bohunova, J.; Duranova, T.

    2000-01-01

    A methodology and computer code for the interpretation of environmental data, i.e. source term assessment, from an on-line environmental monitoring network were developed. The method is based on the conversion of measured dose rates to the source term, i.e. the airborne radioactivity release rate, taking into account real meteorological data and the locations of the monitoring points. The bootstrap estimation methodology and the bipivot method are used to estimate the source term from on-site gamma dose rate monitors. These methods provide an estimate of the mean value of the source term and a confidence interval for it. (author)
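    The dose-rate-to-release-rate conversion plus a bootstrap confidence interval can be sketched schematically. The dispersion factors, noise level, and release rate below are hypothetical, and the resampling-over-monitors scheme is a generic bootstrap, not the bipivot method or the RODOS implementation from the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conversion factors c_i (dose rate per unit release rate) for 8
# monitor locations, bundling dispersion + geometry, and a hypothetical true release.
c = np.array([2.1, 1.4, 0.9, 3.0, 1.1, 0.7, 2.5, 1.8]) * 1e-13   # (Sv/h)/(Bq/s)
Q_true = 5.0e9                                                    # Bq/s
dose = c * Q_true * (1.0 + 0.1 * rng.standard_normal(c.size))     # noisy dose rates

# Least-squares point estimate of the release rate from the model D_i ~ c_i * Q.
Q_hat = np.dot(c, dose) / np.dot(c, c)

# Bootstrap over monitors -> percentile confidence interval for Q.
boot = []
for _ in range(2000):
    idx = rng.integers(0, c.size, c.size)
    boot.append(np.dot(c[idx], dose[idx]) / np.dot(c[idx], c[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

In practice the factors c_i come from an atmospheric dispersion model driven by the real meteorological data, which is where most of the uncertainty enters.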

  10. Estimation of light source colours for light pollution assessment.

    Science.gov (United States)

    Ziou, D; Kerouh, F

    2018-05-01

    The concept of the smart city has raised several technological and scientific issues, including light pollution. Light pollution has various negative impacts on the economy, ecology, and health. This paper deals with the census of the colour of light emitted by lamps used in a city environment. To this end, we derive a light bulb colour estimator based on Bayesian reasoning, directional data, and an image formation model in which the usual concept of reflectance is not used. All the choices we made are devoted to designing an algorithm which can be run almost in real time. Experimental results show the effectiveness of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Vapor Intrusion Estimation Tool for Unsaturated Zone Contaminant Sources. User’s Guide

    Science.gov (United States)

    2016-08-30

    estimation process when applying the tool. The tool described here is focused on vapor-phase diffusion from the current vadose zone source, and is not...from the current defined vadose zone source). The estimated soil gas contaminant concentration obtained from the pre-modeled scenarios for a building...need a full site-specific numerical model to assess the impacts beyond the current vadose zone source.

  12. Multifractal Cross Wavelet Analysis

    Science.gov (United States)

    Jiang, Zhi-Qiang; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene

    Complex systems are composed of mutually interacting components and the output values of these components usually exhibit long-range cross-correlations. Using wavelet analysis, we propose a method of characterizing the joint multifractal nature of these long-range cross correlations, a method we call multifractal cross wavelet analysis (MFXWT). We assess the performance of the MFXWT method by performing extensive numerical experiments on the dual binomial measures with multifractal cross correlations and the bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. For binomial multifractal measures, we find the empirical joint multifractality of MFXWT to be in approximate agreement with the theoretical formula. For bFBMs, MFXWT may provide spurious multifractality because of the wide spanning range of the multifractal spectrum. We also apply the MFXWT method to stock market indices, and in pairs of index returns and volatilities we find an intriguing joint multifractal behavior. The tests on surrogate series also reveal that the cross correlation behavior, particularly the cross correlation with zero lag, is the main origin of cross multifractality.

  13. An Introduction to Wavelet Theory and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-Time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  14. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size.
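    Under Gaussian noise, maximum likelihood reduces to least squares, which can be illustrated with a toy one-dimensional analogue: a line array of sensors, arrival times with Gaussian jitter, and a grid search over the source position with the unknown emission time profiled out. All numbers are hypothetical, and this omits the modal waveguide structure that the record actually analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)
v = 1500.0                                      # m/s, assumed propagation speed
sensors = np.array([0.0, 50.0, 120.0, 200.0])   # sensor coordinates in metres
x_true = 83.0                                   # true source position (hypothetical)
noise = 1e-4                                    # arrival-time noise std in seconds
arrivals = np.abs(sensors - x_true) / v + noise * rng.standard_normal(sensors.size)

# Gaussian noise -> ML is least squares; grid search over candidate positions,
# with the unknown emission time t0 profiled out in closed form at each point.
grid = np.linspace(0.0, 200.0, 2001)
sse = np.empty(grid.size)
for n, x in enumerate(grid):
    d = np.abs(sensors - x) / v
    t0 = np.mean(arrivals - d)                  # ML emission time for this position
    sse[n] = np.sum((arrivals - d - t0)**2)
x_hat = grid[np.argmin(sse)]
```

In the waveguide setting the forward model d(x) is replaced by a sum over propagating modes, and it is the number of resolvable modes — not the array aperture — that bounds the achievable accuracy, as the record's Cramer-Rao analysis shows.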

  15. Estimating regional centile curves from mixed data sources and countries.

    Science.gov (United States)

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.
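    The second step — a regional reference defined as a population-weighted finite mixture of country distributions — can be sketched with two hypothetical countries modeled as normals at one fixed age (the record uses Box-Cox t / Box-Cox power exponential families, so this is a deliberate simplification). Centiles of the mixture are obtained by inverting its CDF:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sd):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

# Two hypothetical country-level weight-for-age distributions at a single age,
# and the countries' population sizes used as mixture weights.
mus, sds = [11.0, 12.5], [1.4, 1.6]          # kg
pops = np.array([30e6, 90e6])
w = pops / pops.sum()

def mixture_cdf(x):
    return sum(wi * norm_cdf(x, m, s) for wi, m, s in zip(w, mus, sds))

def centile(p, lo=5.0, hi=20.0):
    """Invert the monotone mixture CDF by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p3, p50, p97 = centile(0.03), centile(0.50), centile(0.97)
```

The regional median lands between the two country medians, pulled toward the more populous country — exactly the behavior the population weighting is meant to produce.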

  16. Forced Ignition Study Based On Wavelet Method

    Science.gov (United States)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  17. Estimation of subcriticality by neutron source multiplication method

    International Nuclear Information System (INIS)

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in a core tank of the TCA by arraying 2.6% enriched UO2 fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for the fifteen subcritical cores (n=17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with 235U micro-fission counters at the in-core and out-core positions, with a 252Cf neutron source arranged near the core center. The continuous energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between the neutron multiplication factors resulting from the exponential experiment and MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated from MCNP-4A with 500000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)
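    The arithmetic behind the source multiplication method is compact: with a constant source S, the detector count rate scales as C ∝ S/(1 − k_eff), so a reference state with known subcriticality calibrates the proportionality and a new count rate yields a new k_eff. The numbers below are hypothetical, not TCA data.

```python
# Source multiplication: C ∝ S / (1 - k_eff) for a constant source S.
k0, C0 = 0.95, 1200.0    # reference subcriticality and count rate (counts/s), assumed
C1 = 3600.0              # count rate in the new configuration (counts/s), assumed
k1 = 1.0 - (1.0 - k0) * C0 / C1
print(k1)                # ≈ 0.9833: tripled count rate -> one third the margin to critical
```

This simple form assumes the detector sees the same spatial mode in both states; the spatial corrections are what the MCNP comparison in the record quantifies.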

  18. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    Science.gov (United States)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.

  19. Wavelet Filter Banks for Super-Resolution SAR Imaging

    Science.gov (United States)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric features of these methods, their resolution limitations and observation time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.

  20. Development of the methodology for estimation of dose from a source

    International Nuclear Information System (INIS)

    Golebaone, E.M.

    2012-04-01

    The geometry of a source plays an important role when determining which method to apply in order to accurately estimate the dose from that source. If the wrong source geometry is used, the dose received may be underestimated or overestimated, which may lead to a wrong decision in dealing with the exposure situation. In this project a moisture density gauge was used to represent a point source in order to demonstrate the key parameters to be used when estimating dose from a point source. The parameters to be considered are the activity of the source, the ambient dose rate, the gamma constant for the radionuclide, as well as the transport index on the package of the source. The distance from the source and the time spent in the radiation field must be known in order to calculate the dose. (author)
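    For an unshielded point source, those parameters combine through the inverse-square law: dose rate = Γ·A/r², dose = dose rate × time. The gamma constant below is a placeholder value, not the tabulated Γ of any specific radionuclide.

```python
# Point-source external dose sketch: inverse-square law, no shielding.
gamma = 0.06     # mSv*m^2/(h*GBq), hypothetical gamma constant -- use the
                 # tabulated value for the actual radionuclide
A = 3.7          # GBq, source activity (assumed)
r = 2.0          # m, distance from the source
t = 0.5          # h, time spent in the field
dose_rate = gamma * A / r**2     # mSv/h
dose = dose_rate * t             # mSv
print(dose)      # ≈ 0.028 mSv
```

Doubling the distance cuts the dose by a factor of four, which is why distance is usually the cheapest protective measure for a point source; extended or distributed geometries need different formulas, which is the point the record makes.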

  1. Wavelet spectra of JACEE events

    International Nuclear Information System (INIS)

    Suzuki, Naomichi; Biyajima, Minoru; Ohsawa, Akinori.

    1995-01-01

    Pseudo-rapidity distributions of two high multiplicity events Ca-C and Si-AgBr observed by the JACEE are analyzed by a wavelet transform. Wavelet spectra of those events are calculated and compared with the simulation calculations. The wavelet spectrum of the Ca-C event somewhat resembles that simulated with the uniform random numbers. That of Si-AgBr event, however, is not reproduced by simulation calculations with Poisson random numbers, uniform random numbers, or a p-model. (author)

  2. Iris Recognition Using Wavelet

    Directory of Open Access Journals (Sweden)

    Khaliq Masood

    2013-08-01

    Biometric systems are getting more attention in the present era. Iris recognition is one of the most secure and authentic among biometrics, and this field demands more authentic, reliable and fast algorithms to implement these biometric systems in real time. In this paper, an efficient localization technique is presented to identify pupil and iris boundaries using the histogram of the iris image. Two small portions of the iris have been used for polar transformation to reduce computational time and to increase the efficiency of the system. The wavelet transform is used for feature vector generation. Rotation of the iris is compensated without shifts in the iris code. The system is tested on the Multimedia University Iris Database and results show that the proposed system has encouraging performance.

  3. Gamma Splines and Wavelets

    Directory of Open Access Journals (Sweden)

    Hannu Olkkonen

    2013-01-01

    In this work we introduce a new family of splines, termed gamma splines, for continuous signal approximation and multiresolution analysis. The gamma splines are generated by repeated convolution of the exponential function with itself. We study the properties of the discrete gamma splines in signal interpolation and approximation. We prove that the gamma splines obey the two-scale equation based on the polyphase decomposition, and introduce the shift-invariant gamma spline wavelet transform for tree-structured subscale analysis of asymmetric signal waveforms and for systems with asymmetric impulse response. In particular we consider applications in biomedical signal analysis (EEG, ECG, and EMG). Finally, we discuss the suitability of gamma spline signal processing in an embedded VLSI environment.
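    The generating construction — repeated convolution of the one-sided exponential with itself — yields the gamma density family, which can be verified numerically (a sanity check on the construction, not the paper's discrete spline machinery):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
e = np.exp(-t)                   # one-sided exponential kernel on [0, 20)

# Three-fold convolution of the exponential with itself (two discrete convolutions).
g = e.copy()
for _ in range(2):
    g = np.convolve(g, e)[:t.size] * dt   # rectangle-rule approximation of the integral

# Analytic three-fold convolution: the gamma density t**2 * exp(-t) / 2!
ref = t**2 * np.exp(-t) / 2.0
err = np.max(np.abs(g - ref))
```

Each extra convolution raises the order of the gamma density by one and makes the kernel smoother, the same mechanism by which repeated convolution of the box function generates B-splines; the asymmetry of the exponential is what gives gamma splines their one-sided impulse response.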

  4. Neural networks and wavelet analysis in the computer interpretation of pulse oximetry data

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.; Skokowski, P.G.; Leach, R.R. Jr.

    1996-03-01

    Pulse oximeters determine the oxygen saturation level of blood by measuring the light absorption of arterial blood. The sensor consists of red and infrared light sources and photodetectors. A method based on neural networks and wavelet analysis is developed for improved saturation estimation in the presence of sensor motion. Spectral and correlation functions of the dual channel oximetry data are used by a backpropagation neural network to characterize the type of motion. Amplitude ratios of red to infrared signals as a function of time scale are obtained from the multiresolution wavelet decomposition of the two-channel data. Motion class and amplitude ratios are then combined to obtain a short-time estimate of the oxygen saturation level. A final estimate of oxygen saturation is obtained by applying a 15 s smoothing filter on the short-time measurements based on 3.5 s windows sampled every 1.75 s. The design employs two backpropagation neural networks. The first neural network determines the motion characteristics and the second network determines the saturation estimate. Our approach utilizes waveform analysis in contrast to the standard algorithms that are based on the successful detection of peaks and troughs in the signal. The proposed algorithm is numerically efficient and has stable characteristics with a reduced false alarm rate with a small loss in detection. The method can be rapidly developed on a digital signal processing platform.
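    The classical baseline that the record's wavelet/neural-network pipeline improves upon is the "ratio of ratios": the red and infrared AC/DC amplitude ratios are divided and mapped to saturation by an empirical calibration. The sketch below uses synthetic noiseless photoplethysmograms with hypothetical amplitudes and the widely quoted linear calibration, not the paper's wavelet amplitude-ratio method.

```python
import numpy as np

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic photoplethysmograms: DC level plus a cardiac AC component at 1.2 Hz.
# DC levels and AC amplitudes below are hypothetical.
red = 2.0 + 0.020 * np.sin(2.0 * np.pi * 1.2 * t)
ir = 3.0 + 0.045 * np.sin(2.0 * np.pi * 1.2 * t)

def ac_dc(x):
    """Peak-to-peak AC amplitude normalised by the DC level."""
    return (x.max() - x.min()) / x.mean()

R = ac_dc(red) / ac_dc(ir)       # the "ratio of ratios"
spo2 = 110.0 - 25.0 * R          # widely quoted empirical calibration; real devices
                                 # use sensor-specific calibration curves
```

Sensor motion corrupts the peak-to-peak estimate directly, which is why the record replaces it with wavelet-derived amplitude ratios gated by a motion classifier.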

  5. Wavelets and multiscale signal processing

    CERN Document Server

    Cohen, Albert

    1995-01-01

    Since their appearance in mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...

  6. From Fourier analysis to wavelets

    CERN Document Server

    Gomes, Jonas

    2015-01-01

    This text introduces the basic concepts of function spaces and operators, both from the continuous and discrete viewpoints.  Fourier and Window Fourier Transforms are introduced and used as a guide to arrive at the concept of Wavelet transform.  The fundamental aspects of multiresolution representation, and its importance to function discretization and to the construction of wavelets is also discussed. Emphasis is given on ideas and intuition, avoiding the heavy computations which are usually involved in the study of wavelets.  Readers should have a basic knowledge of linear algebra, calculus, and some familiarity with complex analysis.  Basic knowledge of signal and image processing is desirable. This text originated from a set of notes in Portuguese that the authors wrote for a wavelet course on the Brazilian Mathematical Colloquium in 1997 at IMPA, Rio de Janeiro.

  7. Wavelet analysis for nonstationary signals

    International Nuclear Information System (INIS)

    Penha, Rosani Maria Libardi da

    1999-01-01

    Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because the results provide data about the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT is a method of adapting Fourier spectral analysis to nonstationary applications in the time-frequency domain; its main limitation is a single fixed resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suited to nonstationary signals, which overcomes the STFT drawbacks by providing multi-resolution frequency analysis and time localization in a single time-scale graphic. The multiple frequency resolutions are obtained by scaling (dilation/compression) the wavelet function. A comparison of the conventional Fourier transform, the STFT and the wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal and a rotating machine vibration signal. A Hanning window was used for the STFT analysis. Daubechies and harmonic wavelets were used for the continuous, discrete and multi-resolution wavelet analyses. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms proved to be highly efficient tools to detect
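
    The contrast drawn in this abstract can be sketched numerically (a minimal illustration, not the signals or code from the thesis): a hypothetical test signal switches frequency halfway through; the global Fourier spectrum reports both tones but not when the change occurs, while one level of Haar detail coefficients localizes the switching instant.

```python
import numpy as np

# Hypothetical test signal: 50 Hz for the first half second,
# 120 Hz for the second half, sampled at 1024 Hz.
fs = 1024
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

# Global Fourier analysis: both tones appear, averaged over the whole
# record, with no indication of *when* the frequency changed.
spectrum = np.abs(np.fft.rfft(x))

# One level of Haar detail coefficients: scaled differences of adjacent
# samples, which grow abruptly once the faster tone starts.
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
onset = 2 * int(np.argmax(np.abs(d) > 0.4))  # first detail above threshold
print(onset)  # 512, i.e. t = 0.5 s
```

    The 0.4 threshold is an arbitrary choice sitting between the detail amplitudes of the slow and fast tones; in practice it would be set from the background level.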

  8. A wavelet phase filter for emission tomography

    International Nuclear Information System (INIS)

    Olsen, E.T.; Lin, B.

    1995-01-01

    The presence of a high level of noise is a characteristic in some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of wavelet phase, with a focus on reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet based denoising methods

  9. Source term estimation during incident response to severe nuclear power plant accidents. Draft

    Energy Technology Data Exchange (ETDEWEB)

    McKenna, T J; Giitter, J

    1987-07-01

    The various methods of estimating radionuclide release to the environment (source terms) as a result of an accident at a nuclear power reactor are discussed. The major factors affecting potential radionuclide releases off site (source terms) as a result of nuclear power plant accidents are described. The quantification of these factors based on plant instrumentation also is discussed. A range of accident conditions from those within the design basis to the most severe accidents possible are included in the text. A method of gross estimation of accident source terms and their consequences off site is presented. The goal is to present a method of source term estimation that reflects the current understanding of source term behavior and that can be used during an event. (author)

  10. Source term estimation during incident response to severe nuclear power plant accidents. Draft

    International Nuclear Information System (INIS)

    McKenna, T.J.; Giitter, J.

    1987-01-01

    The various methods of estimating radionuclide release to the environment (source terms) as a result of an accident at a nuclear power reactor are discussed. The major factors affecting potential radionuclide releases off site (source terms) as a result of nuclear power plant accidents are described. The quantification of these factors based on plant instrumentation also is discussed. A range of accident conditions from those within the design basis to the most severe accidents possible are included in the text. A method of gross estimation of accident source terms and their consequences off site is presented. The goal is to present a method of source term estimation that reflects the current understanding of source term behavior and that can be used during an event. (author)

  11. Source term estimation during incident response to severe nuclear power plant accidents

    International Nuclear Information System (INIS)

    McKenna, T.J.; Glitter, J.G.

    1988-10-01

    This document presents a method of source term estimation that reflects the current understanding of source term behavior and that can be used during an event. The various methods of estimating radionuclide release to the environment (source terms) as a result of an accident at a nuclear power reactor are discussed. The major factors affecting potential radionuclide releases off site (source terms) as a result of nuclear power plant accidents are described. The quantification of these factors based on plant instrumentation also is discussed. A range of accident conditions from those within the design basis to the most severe accidents possible are included in the text. A method of gross estimation of accident source terms and their consequences off site is presented. 39 refs., 48 figs., 19 tabs

  12. A Novel Method Based on Oblique Projection Technology for Mixed Sources Estimation

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2014-01-01

    Full Text Available Reducing the computational complexity of near-field and far-field source localization algorithms has been considered a serious problem in the field of array signal processing. A novel algorithm for mixed-source location estimation based on oblique projection is proposed in this paper. The sources are estimated in two stages, and the sensor noise power is estimated and eliminated from the covariance, which improves the accuracy of the estimation of mixed sources. Using the idea of compression, the range information of near-field sources is obtained by searching part of the Fresnel area instead of the whole area, which reduces the processing time. Compared with traditional algorithms, the proposed algorithm has lower computational complexity and the ability to resolve two closely spaced sources with high resolution and accuracy. The duplication of range estimation is also avoided. Finally, simulation results are provided to demonstrate the performance of the proposed method.

  13. Wavelet-Transform-Based Power Management of Hybrid Vehicles with Multiple On-board Energy Sources Including Fuel Cell, Battery and Ultracapacitor

    Science.gov (United States)

    2008-09-12

    considered to be promising for application as distributed generation sources due to high efficiency and compactness [1-2], [21-24]. The PEMFC is also a primary candidate for environment-friendly vehicles. [The excerpt continues with the nomenclature of the PEMFC model, including the constants B and C; equation (10) and its surrounding symbols are garbled in extraction.] The block diagram of the PEMFC model based on the above equations is shown in Fig

  14. Signal Analysis by New Mother Wavelets

    International Nuclear Information System (INIS)

    Niu Jinbo; Qi Kaiguo; Fan Hongyi

    2009-01-01

    Based on the general formula for finding qualified mother wavelets [Opt. Lett. 31 (2006) 407], we compute wavelet transforms with the newly found mother wavelets (characterized by the power 2n) for some optical Gaussian pulses, which exhibit the ability to measure the frequency of the pulse more precisely and clearly. We also work with complex mother wavelets composed of the new real mother wavelets, which offer the ability to obtain phase information of the pulse as well as amplitude information. The analogy between the behavior of Hermite-Gauss beams and that of the new wavelet transforms is noted. (general)

  15. Blind Component Separation in Wavelet Space: Application to CMB Analysis

    Directory of Open Access Journals (Sweden)

    J. Delabrouille

    2005-09-01

    Full Text Available It is a recurrent issue in astronomical data analysis that observations are incomplete maps with missing patches or intentionally masked parts. In addition, many astrophysical emissions are nonstationary processes over the sky. All these effects impair data processing techniques which work in the Fourier domain. Spectral matching ICA (SMICA is a source separation method based on spectral matching in Fourier space designed for the separation of diffuse astrophysical emissions in cosmic microwave background observations. This paper proposes an extension of SMICA to the wavelet domain and demonstrates the effectiveness of wavelet-based statistics for dealing with gaps in the data.

  16. Hazardous Source Estimation Using an Artificial Neural Network, Particle Swarm Optimization and a Simulated Annealing Algorithm

    NARCIS (Netherlands)

    Wang, Rongxiao; Chen, B.; Qiu, S.; Ma, Liang; Zhu, Zhengqiu; Wang, Yiping; Qiu, Xiaogang

    2018-01-01

    Locating and quantifying the emission source plays a significant role in the emergency management of hazardous gas leak accidents. Due to the lack of a desirable atmospheric dispersion model, current source estimation algorithms cannot meet the requirements of both accuracy and efficiency. In

  17. A simple output voltage control scheme for single phase wavelet ...

    African Journals Online (AJOL)

    DR OKE

    of the wavelet modulated (WM) scheme is that a single synthesis function, derived ... a single-phase H-bridge voltage-source (VS) inverter using MATLAB simulations. ... reconstruction process has been suggested to devise a new class of ...

  18. Diagnostics of detector tube impacting with wavelet techniques

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [KFKI-AEKI Applied Reactor Physics, Budapest (Hungary); Pazsit, I. [Chalmers Univ. of Tech., Goeteborg (Sweden). Dept. of Reactor Physics

    1997-12-08

    A neutron noise based method is proposed for the detection of impacting of detector tubes in BWRs. The basic idea relies on the assumption that non-stationary transients (e.g. fuel box vibrations) may be induced at impacting. Such short-lived transients are difficult to detect by spectral analysis methods. However, their presence in the detector signal can be detected by wavelet analysis. A simple wavelet technique, the so-called Haar transform, is suggested for the detection of impacting. Tests of the proposed method have been performed with success on both simulated data with controlled impacting as well as real measurement data. The simulation model as well as the results of the wavelet analysis are reported in this paper. The source code, written in MATLAB, is available at a public ftp site. The necessary information to reproduce the simulation results is also reported. (author).
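
    As a rough illustration of why the Haar transform suits this task (a hypothetical signal, not the BWR simulation data from the paper), a single short-lived impulse buried in a slowly varying vibration signal shows up directly in the first-level Haar detail coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Slow "vibration" background plus measurement noise; an impact-like
# transient is injected at sample 600.
x = np.sin(2 * np.pi * 5 * np.arange(n) / n) + 0.05 * rng.standard_normal(n)
x[600] += 1.0

# First-level Haar detail coefficients (scaled adjacent differences).
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
impact = 2 * int(np.argmax(np.abs(d)))
print(impact)  # 600: the detail coefficients spike at the transient
```

    A global Fourier spectrum of the same signal would spread the impulse energy over all frequencies, which is exactly the difficulty the abstract describes.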

  19. Diagnostics of detector tube impacting with wavelet techniques

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Pazsit, I

    1998-04-01

    A neutron noise based method is proposed for the detection of impacting of detector tubes in BWRs. The basic idea relies on the assumption that non-stationary transients (e.g. fuel box vibrations) may be induced at impacting. Such short-lived transients are difficult to detect by spectral analysis methods. However, their presence in the detector signal can be detected by wavelet analysis. A simple wavelet technique, the so-called Haar transform, is suggested for the detection of impacting. Tests of the proposed method have been performed with success on both simulated data with controlled impacting as well as with real measurement data. The simulation model as well as the results of the wavelet analysis are reported in this paper. The source codes written in MATLAB[reg] are available at a public ftp site. The necessary information to reproduce the simulation results is also reported.

  20. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  1. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions-of-arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which describe these relations using the propagator technique. The central DOA estimates are then obtained from the principal diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method significantly reduces the computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, the approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  2. Analysis of safety information for nuclear power plants and development of source term estimation program

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Choi, Seong Soo; Park, Jin Hee

    1999-12-01

    The current CARE (Computerized Advisory System for Radiological Emergency) at KINS (Korea Institute of Nuclear Safety) has no STES (Source Term Estimation System) linking SIDS (Safety Information Display System) with FADAS (Following Accident Dose Assessment System), so an STES is being developed in this study. The STES estimates the source term based on the safety information provided by SIDS. The estimated source term is given to FADAS as an input for estimating the environmental effect of radiation. In this first-year project, an STES for Kori 3 and 4 and Younggwang 1 and 2 has been developed. Since there is no CARE for the Wolsong (PHWR) plants yet, a CARE for Wolsong is under construction. The safety parameters have been selected, and the safety information display screens and the alarm logic for plant status changes have been developed for Wolsong Unit 2 based on the design documents for CANDU plants

  3. Matching pursuit and source deflation for sparse EEG/MEG dipole moment estimation.

    Science.gov (United States)

    Wu, Shun Chi; Swindlehurst, A Lee

    2013-08-01

    In this paper, we propose novel matching pursuit (MP)-based algorithms for EEG/MEG dipole source localization and parameter estimation for multiple measurement vectors with constant sparsity. The algorithms combine the ideas of MP for sparse signal recovery and source deflation, as employed in estimation via alternating projections. The source-deflated matching pursuit (SDMP) approach mitigates the problem of residual interference inherent in sequential MP-based methods or recursively applied (RAP)-MUSIC. Furthermore, unlike prior methods based on alternating projection, SDMP allows one to efficiently estimate the dipole orientation in addition to its location. Simulations show that the proposed algorithms outperform existing techniques under various conditions, including those with highly correlated sources. Results using real EEG data from auditory experiments are also presented to illustrate the performance of these algorithms.

  4. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    Science.gov (United States)

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  5. DOA and Pitch Estimation of Audio Sources using IAA-based Filtering

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    For decades, it has been investigated how to separately solve the problems of both direction-of-arrival (DOA) and pitch estimation. Recently, it was found that estimating these parameters jointly from multichannel recordings of audio can be extremely beneficial. Many joint estimators are based on knowledge of the inverse sample covariance matrix. Typically, this covariance is estimated using the sample covariance matrix, but for this estimate to be full rank, many temporal samples are needed. In cases with non-stationary signals, this is a serious limitation. We therefore investigate how a recent joint DOA and pitch filtering-based estimator can be combined with the iterative adaptive approach to circumvent this limitation in joint DOA and pitch estimation of audio sources. Simulations show a clear improvement compared to when using the sample covariance matrix and the considered approach also...
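
    The rank argument behind the stated limitation is easy to verify numerically (a generic illustration, not the authors' code): with fewer snapshots than sensors, the sample covariance matrix is singular, so any estimator that needs its inverse breaks down.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8  # number of sensors

def sample_covariance(n_snapshots):
    # Complex circular Gaussian snapshots, one column per snapshot.
    x = (rng.standard_normal((m, n_snapshots))
         + 1j * rng.standard_normal((m, n_snapshots))) / np.sqrt(2.0)
    return x @ x.conj().T / n_snapshots

# The rank is at most the number of snapshots: 4 snapshots give a
# rank-4 (non-invertible) 8 x 8 matrix; 200 snapshots give full rank.
print(np.linalg.matrix_rank(sample_covariance(4)))    # 4
print(np.linalg.matrix_rank(sample_covariance(200)))  # 8
```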

  6. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

    Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at Los Alamos National Laboratory and simulated data for the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum likelihood estimation because it is based on simple back projections of Compton scatter data
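
    The idea of scanning candidate distances against a solid-angle measure can be sketched as follows. This is a toy on-axis disk model, not the PCI reconstruction code; `disk_solid_angle` and all numerical values are hypothetical, chosen only to show the scan-and-match step.

```python
import numpy as np

def disk_solid_angle(r, z):
    # Solid angle subtended by a disk of radius r viewed on-axis from
    # distance z (standard closed form, monotone decreasing in z).
    return 2.0 * np.pi * (1.0 - z / np.hypot(z, r))

# Pretend the reconstructed image subtends this "measured" solid angle
# for a source at the (unknown) true distance of 30 cm.
r_img, z_true = 5.0, 30.0
omega_measured = disk_solid_angle(r_img, z_true)

# Scan candidate source-to-detector distances and keep the best match.
candidates = np.linspace(1.0, 100.0, 2000)
z_est = candidates[np.argmin(np.abs(disk_solid_angle(r_img, candidates)
                                    - omega_measured))]
print(round(float(z_est), 1))  # 30.0
```

    Because the solid angle is monotone in distance, the scan has a unique best match, which is what makes this cheaper than a full maximum-likelihood search.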

  7. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P ... successful application of wavelets in image com- ... b) Soft threshold: In this case, all the coefficients x ... [8] http://www.jpeg.org Official site of the Joint Photographic Experts Group.

  8. Wavelet Transforms using VTK-m

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shaomeng [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides on wavelet transforms using VTK-m. First, wavelets are discussed in detail, then VTK-m; the two implementations are then compared on performance and on accuracy, followed by lessons learned, conclusions, and next steps. The lessons learned are as follows. Launching worklets is expensive. The natural logic of performing a 2D wavelet transform is to repeat the same 1D wavelet transform on every row, then repeat the same 1D wavelet transform on every column, invoking the 1D wavelet worklet num_rows x num_columns times. The VTK-m approach is instead to create a worklet for 2D that handles both rows and columns, and to invoke this new worklet only once. This gives a fast calculation, but the 1D implementations cannot be reused.
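
    The row-then-column structure of the separable 2D transform described in the slides can be sketched in a few lines (a generic NumPy illustration of one Haar level, not the VTK-m worklet code):

```python
import numpy as np

def haar1d(v):
    # One level of the orthonormal 1D Haar transform along the last
    # axis: averages first, then details.
    a = (v[..., 0::2] + v[..., 1::2]) / np.sqrt(2.0)
    d = (v[..., 0::2] - v[..., 1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d], axis=-1)

rng = np.random.default_rng(2)
img = rng.standard_normal((8, 8))

# The "natural" logic from the slides: the same 1D transform applied to
# every row, then the same 1D transform applied to every column.
out = haar1d(haar1d(img).T).T

# The transform is orthonormal, so the image energy is preserved.
print(np.allclose((out ** 2).sum(), (img ** 2).sum()))  # True
```

    The fused VTK-m worklet computes the same `out` array; the difference discussed in the slides is purely how many times the device kernel is launched.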

  9. From Calculus to Wavelets: A New Mathematical Technique

    Indian Academy of Sciences (India)

    From Calculus to Wavelets: A New Mathematical Technique. Keywords: wavelet analysis, physical properties. Gerald B Folland. General Article, Resonance – Journal of Science Education, Volume 2, Issue 4, April 1997, pp. 25-37.

  10. Reassessment of the technical bases for estimating source terms. Final report

    International Nuclear Information System (INIS)

    Silberberg, M.; Mitchell, J.A.; Meyer, R.O.; Ryder, C.P.

    1986-07-01

    This document describes a major advance in the technology for calculating source terms from postulated accidents at US light-water reactors. The improved technology consists of (1) an extensive data base from severe accident research programs initiated following the TMI accident, (2) a set of coupled and integrated computer codes (the Source Term Code Package), which models key aspects of fission product behavior under severe accident conditions, and (3) a number of detailed mechanistic codes that bridge the gap between the data base and the Source Term Code Package. The improved understanding of severe accident phenomena has also allowed an identification of significant sources of uncertainty, which should be considered in estimating source terms. These sources of uncertainty are also described in this document. The current technology provides a significant improvement in evaluating source terms over that available at the time of the Reactor Safety Study (WASH-1400) and, because of this significance, the Nuclear Regulatory Commission staff is recommending its use

  11. Texture analysis using Gabor wavelets

    Science.gov (United States)

    Naghdy, Golshah A.; Wang, Jian; Ogunbona, Philip O.

    1996-04-01

    Receptive field profiles of simple cells in the visual cortex have been shown to resemble even-symmetric or odd-symmetric Gabor filters. Computational models employed in the analysis of textures have been motivated by two-dimensional Gabor functions arranged in a multi-channel architecture. More recently, wavelets have emerged as a powerful tool for non-stationary signal analysis capable of encoding scale-space information efficiently. A multi-resolution implementation in the form of a dyadic decomposition of the signal of interest has been popularized by many researchers. In this paper, a Gabor wavelet configured in a 'rosette' fashion is used as a multi-channel filter-bank feature extractor for texture classification. The 'rosette' spans 360 degrees of orientation and covers frequencies from dc. In the proposed algorithm, the texture images are decomposed by the Gabor wavelet configuration, and feature vectors corresponding to the mean of the outputs of the multi-channel filters are extracted. A minimum-distance classifier is used in the classification procedure. For comparison, the Gabor filter was used to classify the same texture images from the Brodatz album, and the results indicate the superior discriminatory characteristics of the Gabor wavelet. With the test images used, it can be concluded that the Gabor wavelet model is a better approximation of the cortical cell receptive field profiles.
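
    A minimal version of such a multi-channel Gabor feature extractor with a minimum-distance classifier might look like this. It is a two-orientation toy on synthetic stripe textures; the kernel size, frequency, and envelope width are arbitrary choices, not the rosette parameters of the paper.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=2.0):
    # Even-symmetric Gabor: a cosine carrier along orientation theta
    # under an isotropic Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def features(img, bank):
    # Mean absolute response of each channel (circular convolution via FFT).
    n = img.shape[0]
    f_img = np.fft.fft2(img)
    return np.array([np.abs(np.fft.ifft2(f_img * np.fft.fft2(k, s=(n, n))).real).mean()
                     for k in bank])

n = 32
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
vertical = np.sin(2 * np.pi * xx / 4.0)    # stripes varying along x
horizontal = np.sin(2 * np.pi * yy / 4.0)  # stripes varying along y
bank = [gabor_kernel(9, 0.25, th) for th in (0.0, np.pi / 2)]

fv, fh = features(vertical, bank), features(horizontal, bank)
# Minimum-distance classification of a noisy vertical patch.
noisy = vertical + 0.3 * np.random.default_rng(3).standard_normal((n, n))
label = int(np.argmin([np.linalg.norm(features(noisy, bank) - f)
                       for f in (fv, fh)]))
print(label)  # 0: classified as the vertical texture
```

    Each orientation channel responds strongly only to stripes aligned with its carrier, so the two-element feature vectors separate the textures cleanly even under noise.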

  12. Estimation of sediment sources using selected chemical tracers in the Perry lake basin, Kansas, USA

    Science.gov (United States)

    Juracek, K.E.; Ziegler, A.C.

    2009-01-01

    The ability to achieve meaningful decreases in sediment loads to reservoirs requires a determination of the relative importance of sediment sources within the contributing basins. In an investigation of sources of fine-grained sediment (clay and silt) within the Perry Lake Basin in northeast Kansas, representative samples of channel-bank sources, surface-soil sources (cropland and grassland), and reservoir bottom sediment were collected, chemically analyzed, and compared. The samples were sieved to isolate the fine fraction; total nitrogen (TN), total phosphorus (TP), total organic carbon (TOC), and 137Cs were selected for use in the estimation of sediment sources. To further account for differences in particle-size composition between the sources and the reservoir bottom sediment, constituent-ratio and clay-normalization techniques were used. Computed ratios included TOC to TN, TOC to TP, and TN to TP. Constituent concentrations (TN, TP, TOC) and activities (137Cs) were normalized by dividing by the percentage of clay. Thus, the sediment-source estimations involved the use of seven sediment-source indicators. Within the Perry Lake Basin, the consensus of the seven indicators was that both channel-bank and surface-soil sources were important in the Atchison County Lake and Banner Creek Reservoir subbasins, whereas channel-bank sources were dominant in the Mission Lake subbasin. On the sole basis of 137Cs activity, surface-soil sources contributed the most fine-grained sediment to Atchison County Lake, and channel-bank sources contributed the most fine-grained sediment to Banner Creek Reservoir and Mission Lake. Both the seven-indicator consensus and 137Cs indicated that channel-bank sources were dominant for Perry Lake and that channel-bank sources increased in importance with distance downstream in the basin. © 2009 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  13. Blind estimation of the number of speech source in reverberant multisource scenarios based on binaural signals

    DEFF Research Database (Denmark)

    May, Tobias; van de Par, Steven

    2012-01-01

    In this paper we present a new approach for estimating the number of active speech sources in the presence of interfering noise sources and reverberation. First, a binaural front-end is used to detect the spatial positions of all active sound sources, resulting in a binary mask for each candidate ... on a support vector machine (SVM) classifier. A systematic analysis shows that the proposed algorithm is able to blindly determine the number and the corresponding spatial positions of speech sources in multisource scenarios and generalizes well to unknown acoustic conditions...

  14. Analysis of transient signals by Wavelet transform

    International Nuclear Information System (INIS)

    Penha, Rosani Libardi da; Silva, Aucyone A. da; Ting, Daniel K.S.; Oliveira Neto, Jose Messias de

    2000-01-01

    The objective of this work is to apply the wavelet transform to transient signals. The wavelet technique can outline short-time events that are not easily detected using traditional techniques. In this work, the wavelet transform is compared with the Fourier transform using simulated data and rotor rig data. These data contain known transients. The wavelet transform could follow all the transients, which the Fourier techniques could not. (author)

  15. Detection of seismic phases by wavelet transform. Dependence of its performance on wavelet functions; Wavelet henkan ni yoru jishinha no iso kenshutsu. Wavelet ni yoru sai

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, X; Yamazaki, K [Tokyo Gakugei University, Tokyo (Japan); Oguchi, Y [Hosei University, Tokyo (Japan)

    1997-10-22

    A study has been performed on the wavelet analysis of seismic waves. In wavelet analysis of seismic waves, the results obtained with different wavelet functions may differ greatly. The study carried out the following analyses: an analysis of amplitude and phase using a wavelet transform with the Morlet wavelet function on P- and S-waves generated by natural earthquakes and a P-wave generated by an artificial earthquake, and an analysis using a continuous wavelet transform with a complex wavelet function constructed from a completely diagonal Daubechies scaling function and the corresponding wavelet function. As a result, the following points were made clear: the detection of abnormal components or discontinuities depends on the wavelet function; if the Morlet wavelet function is used with properly selected angular frequency and scale, equiphase lines in a phase scalogram concentrate at the discontinuity; and the result of applying the complex wavelet function is superior to that of applying the Morlet wavelet function. 2 refs., 5 figs.
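
    A minimal Morlet-based sketch of discontinuity detection (generic NumPy code under stated assumptions, not the authors' implementation; here the coefficient modulus, rather than the phase scalogram used in the paper, flags the edges):

```python
import numpy as np

def morlet_cwt(x, scale, w0=6.0):
    # Continuous wavelet transform at one scale with a complex Morlet
    # wavelet, evaluated by FFT-based (circular) correlation.
    n = len(x)
    t = (np.arange(n) - n // 2) / scale
    psi = np.pi ** -0.25 * np.exp(1j * w0 * t - t ** 2 / 2) / np.sqrt(scale)
    psi = np.fft.ifftshift(psi)  # center the wavelet at index 0
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(psi)))

# A rectangular pulse: two step discontinuities, at samples 200 and 300.
x = np.zeros(512)
x[200:300] = 1.0

w = morlet_cwt(x, scale=8.0)
edge = int(np.argmax(np.abs(w)))  # modulus maximum sits near one of the edges
```

    Because the Morlet wavelet has (approximately) zero mean, the coefficients vanish over the flat parts of the signal and peak where it jumps, which is the behavior the abstract attributes to properly chosen angular frequency and scale.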

  16. WAVELET TRANSFORM AND LIP MODEL

    Directory of Open Access Journals (Sweden)

    Guy Courbebaisse

    2011-05-01

    Full Text Available The Fourier transform is well suited to the study of stationary functions. Yet it is superseded by the wavelet transform for the powerful characterization of function features such as singularities. On the other hand, the LIP (Logarithmic Image Processing) model is a mathematical framework developed by Jourlin and Pinoli, dedicated to the representation and processing of gray-tone images, hereafter called logarithmic images. This mathematically well-defined model, comprising a Fourier transform "of its own", provides an effective tool for the representation of images obtained by transmitted light, such as microscope images. This paper presents a wavelet transform within the LIP framework, with preservation of the classical wavelet transform properties. We show that the fast computation algorithm due to Mallat can easily be used. An application is given for the detection of crests.

  17. Direction-of-Arrival Estimation for Coherent Sources via Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Zhang-Meng Liu

    2014-01-01

    Full Text Available A spatial filtering-based relevance vector machine (RVM) is proposed in this paper to separate coherent sources and estimate their directions-of-arrival (DOAs), with the filter parameters and DOA estimates initialized and refined via sparse Bayesian learning. The RVM is used to exploit the spatial sparsity of the incident signals and gain improved adaptability to more demanding scenarios, such as low signal-to-noise ratio (SNR), limited snapshots, and spatially adjacent sources, and the spatial filters are introduced to enhance the global convergence of the original RVM in the case of coherent sources. The proposed method adapts to arbitrary array geometry, and simulation results show that it surpasses existing methods in DOA estimation performance.

  18. Dual-tree complex wavelet for medical image watermarking

    International Nuclear Information System (INIS)

    Mavudila, K.R.; Ndaye, B.M.; Masmoudi, L.; Hassanain, N.; Cherkaoui, M.

    2010-01-01

    In order to transmit medical data between hospitals, we embed the information for each patient and their diagnosis in the image; watermarking consists of inserting a message in the image and recovering it with the maximum possible fidelity. This paper presents a blind watermarking scheme in the dual-tree wavelet transform domain, which increases robustness while preserving image quality. The system is transparent to the user and allows image integrity control. In addition, it provides information on the location of potential alterations and an evaluation of image modifications, which is of major importance in a medico-legal framework. An example using head magnetic resonance and mammography imaging illustrates the overall method. Wavelet techniques can be successfully applied in various image processing tasks, namely image de-noising, segmentation, classification, watermarking and others. In this paper we discuss the application of the dual-tree complex wavelet transform (DT-CWT), which has significant advantages over the classical discrete wavelet transform (DWT) for certain image processing problems. The DT-CWT is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The main part of the paper is devoted to exploiting the exceptional quality of the DT-CWT, compared to the classical DWT, for blind medical image watermarking. Our schemes use bivariate shrinkage with local variance estimation, are robust to attacks, and favourably preserve the visual quality. Experimental results show that watermarks embedded using the DT-CWT give good image quality and are robust in comparison with the classical DWT.

  19. Fast reversible wavelet image compressor

    Science.gov (United States)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.

  20. Fundamental papers in wavelet theory

    CERN Document Server

    Walnut, David F

    2006-01-01

    This book traces the prehistory and initial development of wavelet theory, a discipline that has had a profound impact on mathematics, physics, and engineering. Interchanges between these fields during the last fifteen years have led to a number of advances in applications such as image compression, turbulence, machine vision, radar, and earthquake prediction. This book contains the seminal papers that presented the ideas from which wavelet theory evolved, as well as those major papers that developed the theory into its current form. These papers originated in a variety of journals from differ

  1. A CMOS Morlet Wavelet Generator

    Directory of Open Access Journals (Sweden)

    A. I. Bautista-Castillo

    2017-04-01

    Full Text Available The design and characterization of a CMOS circuit for Morlet wavelet generation is introduced. With the proposed Morlet wavelet circuit, it is possible to reach a low power consumption, improve standard deviation (σ) control and also have a small form factor. A prototype in a double-poly, three-metal-layer, 0.5 µm CMOS process from the MOSIS foundry was carried out in order to verify the functionality of the proposal. However, the design methodology can be extended to different CMOS processes. According to the performance exhibited by the circuit, it may be useful in many different signal processing tasks, such as those involving nonlinear time-variant systems.

  2. Estimation of microwave source location in precipitating electron fluxes according to Viking satellite data

    International Nuclear Information System (INIS)

    Khrushchinskij, A.A.; Ostapenko, A.A.; Gustafsson, G.; Eliasson, L.; Sandal, I.

    1989-01-01

    According to the Viking satellite data on electron fluxes in the 0.1-300 keV energy range, the microburst source location is estimated. On the basis of experimental delays between detected peaks in different energy channels and theoretical calculations of these delays within the dipole field model (L ∼ 4-5.5), it is shown that the most probable source location is the equatorial region, with its centre shifted 5-10° towards the ionosphere

  3. Wavelet series approximation using wavelet function with compactly ...

    African Journals Online (AJOL)

    The wavelets generated by a scaling function with compact support are useful in various applications, especially for the reconstruction of functions. Generally, the computational process will be faster if the scaling function's support decreases, so computational errors are summed from one level to another. In this article, the ...

  4. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
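
    The complete-history exposure described above is a time-weighted average over addresses; a minimal sketch of that weighting (the day counts and concentrations are made up, not RLINE outputs):

```python
def weighted_exposure(residences):
    """Pregnancy-average exposure from a residential history, where
    `residences` is a list of (days_at_address, annual_mean_pm25) pairs."""
    total_days = sum(days for days, _ in residences)
    return sum(days * conc for days, conc in residences) / total_days

# A mother who moved once: 180 days near a major road, then 100 days at the
# (cleaner) birth address. Using the birth address alone would miss the
# earlier, higher exposure.
birth_address_only = 0.4
full_history = weighted_exposure([(180, 1.2), (100, 0.4)])
```

    Comparing `full_history` with `birth_address_only` reproduces, in miniature, the kind of measurement error the study quantifies.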

  5. Wavelets a tutorial in theory and applications

    CERN Document Server

    1992-01-01

    Wavelets: A Tutorial in Theory and Applications is the second volume in the new series WAVELET ANALYSIS AND ITS APPLICATIONS. As a companion to the first volume in this series, this volume covers several of the most important areas in wavelets, ranging from the development of the basic theory such as construction and analysis of wavelet bases to an introduction of some of the key applications, including Mallat's local wavelet maxima technique in second generation image coding. A fairly extensive bibliography is also included in this volume.Key Features* Covers several of the

  6. The use of multiwavelets for uncertainty estimation in seismic surface wave dispersion.

    Energy Technology Data Exchange (ETDEWEB)

    Poppeliers, Christian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    This report describes a new single-station analysis method to estimate the dispersion and uncertainty of seismic surface waves using the multiwavelet transform. Typically, when estimating the dispersion of a surface wave using only a single seismic station, the seismogram is decomposed into a series of narrow-band realizations using a bank of narrow-band filters. By then enveloping and normalizing the filtered seismograms and identifying the maximum power as a function of frequency, the group velocity can be estimated if the source-receiver distance is known. However, using the filter bank method, there is no robust way to estimate uncertainty. In this report, I introduce a new method of estimating the group velocity that includes an estimate of uncertainty. The method is similar to the conventional filter bank method, but uses a class of functions, called Slepian wavelets, to compute a series of wavelet transforms of the data. Each wavelet transform is mathematically similar to a filter bank; however, the time-frequency tradeoff is optimized. By taking multiple wavelet transforms, I form a population of dispersion estimates from which standard statistical methods can be used to estimate uncertainty. I demonstrate the utility of this new method by applying it to synthetic data as well as ambient-noise surface-wave cross-correlograms recorded by the University of Nevada Seismic Network.
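
    The conventional filter-bank procedure that the multiwavelet method improves on can be sketched as follows; the Gaussian band-pass and moving-average envelope are deliberate simplifications (a production code would use a Hilbert-transform envelope), and the test pulse is synthetic:

```python
import numpy as np

def group_velocity(seis, dt, distance, centers, bw=0.2):
    """Single-station filter-bank group-velocity estimate: isolate each
    band with a Gaussian filter in the frequency domain, envelope it, and
    read the travel time at the envelope maximum."""
    n = len(seis)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(seis)
    t = np.arange(n) * dt
    kernel = np.ones(21) / 21.0               # moving average as a crude envelope
    vels = []
    for fc in centers:
        gauss = np.exp(-0.5 * ((freqs - fc) / (bw * fc)) ** 2)
        band = np.fft.irfft(spec * gauss, n)
        env = np.convolve(np.abs(band), kernel, mode="same")
        vels.append(distance / t[np.argmax(env)])
    return np.array(vels)

# Non-dispersive test pulse arriving at t = 5 s from 10 km away: every
# band should give a group velocity near 2 km/s.
dt = 0.01
t = np.arange(0, 10, dt)
pulse = np.exp(-((t - 5.0) / 0.3) ** 2) * np.cos(2 * np.pi * 1.0 * (t - 5.0))
v = group_velocity(pulse, dt, distance=10.0, centers=[0.8, 1.0, 1.2])
```

    The report's point is that repeating this with several Slepian wavelets yields a population of such estimates, from which an uncertainty can be computed.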

  7. Use of multiple data sources to estimate hepatitis C seroprevalence among prisoners: A retrospective cohort study.

    Directory of Open Access Journals (Sweden)

    Kathryn J Snow

    Full Text Available Hepatitis C is a major cause of preventable morbidity and mortality. Prisoners are a key population for hepatitis C control programs, and with the advent of highly effective therapies, prisons are increasingly important sites for hepatitis C diagnosis and treatment. Accurate estimates of hepatitis C prevalence among prisoners are needed in order to plan and resource service provision, however many prevalence estimates are based on surveys compromised by limited and potentially biased participation. We aimed to compare estimates derived from three different data sources, and to assess whether the use of self-report as a supplementary data source may help researchers assess the risk of selection bias. We used three data sources to estimate the prevalence of hepatitis C antibodies in a large cohort of Australian prisoners: prison medical records, self-reported status during a face-to-face interview prior to release from prison, and data from a statewide notifiable conditions surveillance system. Of 1,315 participants, 33.8% had at least one indicator of hepatitis C seropositivity, however less than one third of these (9.5% of the entire cohort) were identified by all three data sources. Among participants of known status, self-report had a sensitivity of 80.1% and a positive predictive value of 97.8%. Any one data source used in isolation would have under-estimated the prevalence of hepatitis C in this cohort. Using multiple data sources in studies of hepatitis C seroprevalence among prisoners may improve case detection and help researchers assess the risk of selection bias due to non-participation in serological testing.
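
    The self-report validation figures quoted above come from a standard 2x2 comparison against the reference data; the counts below are hypothetical, chosen only to show the arithmetic:

```python
def self_report_metrics(tp, fp, fn):
    """Sensitivity and positive predictive value of self-reported status,
    taking the reference data (e.g. serology) as truth."""
    sensitivity = tp / (tp + fn)   # reported positive among all true positives
    ppv = tp / (tp + fp)           # true positives among all who reported positive
    return sensitivity, ppv

# 100 truly seropositive participants, 80 of whom report it; 2 false reports.
sens, ppv = self_report_metrics(tp=80, fp=2, fn=20)
```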

  8. Haar wavelets, fluctuations and structure functions: convenient choices for geophysics

    Directory of Open Access Journals (Sweden)

    S. Lovejoy

    2012-09-01

    Full Text Available Geophysical processes are typically variable over huge ranges of space-time scales. This has led to the development of many techniques for decomposing series and fields into fluctuations Δv at well-defined scales. Classically, one defines fluctuations as differences: (Δv)_diff = v(x + Δx) − v(x), and this is adequate for many applications (Δx is the "lag"). However, if over a range one has scaling Δv ∝ Δx^H, these difference fluctuations are only adequate when 0 < H < 1. Hence, there is the need for other types of fluctuations. In particular, atmospheric processes in the "macroweather" range ≈10 days to 10–30 yr generally have −1 < H < 0, so that a definition valid over the range −1 < H < 1 would be very useful for atmospheric applications. A general framework for defining fluctuations is wavelets. However, the generality of wavelets often leads to fairly arbitrary choices of "mother wavelet" and the resulting wavelet coefficients may be difficult to interpret. In this paper we argue that a good choice is provided by the (historically first) wavelet, the Haar wavelet (Haar, 1910), which is easy to interpret and, if needed, to generalize, yet has rarely been used in geophysics. It is also easy to implement numerically: the Haar fluctuation (Δv)_Haar at lag Δx is simply equal to the difference of the mean from x to x + Δx/2 and from x + Δx/2 to x + Δx. Indeed, we shall see that the interest of the Haar wavelet is this relation to the integrated process rather than its wavelet nature per se.

    Using numerical multifractal simulations, we show that it is quite accurate, and we compare and contrast it with another similar technique, detrended fluctuation analysis. We find that, for estimating scaling exponents, the two methods are very similar, yet
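
    The Haar fluctuation defined above (mean over the second half-lag minus mean over the first) takes only a few lines with cumulative sums; the linear-trend check at the end is an illustrative H = 1 case, not data from the paper:

```python
import numpy as np

def haar_fluctuation(v, lag):
    """Haar fluctuation at an even integer lag: for every start index x,
    mean of v over [x + lag/2, x + lag) minus mean over [x, x + lag/2)."""
    half = lag // 2
    c = np.concatenate(([0.0], np.cumsum(v)))     # prefix sums for fast means
    first = (c[half:-half] - c[:-lag]) / half     # mean of the first half-lag
    second = (c[lag:] - c[half:-half]) / half     # mean of the second half-lag
    return second - first

# For a pure linear trend v(x) = x (scaling with H = 1), the Haar
# fluctuation equals lag/2 everywhere, i.e. it grows linearly with the lag.
fluct = haar_fluctuation(np.arange(100.0), 8)
```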

  9. Wavelet entropy characterization of elevated intracranial pressure.

    Science.gov (United States)

    Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Chad, Miller; Hu, Xiao

    2008-01-01

    Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, tumor, etc. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy to study, for the first time, the difference between the normal and hypertensive states of ICP. The wavelet entropy revealed findings similar to those of approximate entropy: entropy during the ICH state is smaller than in the normal state. Moreover, with wavelet entropy, we can see that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distribution across the wavelet bands in these two states is actually different. Based on these results, we suggest that ICH may be formed by the re-allocation of oscillation energy within the brain.
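
    A minimal version of this kind of comparison, using an orthonormal Haar DWT and the Shannon entropy of the relative band energies (the signals below are synthetic stand-ins, not ICP recordings):

```python
import numpy as np

def band_energies(x, levels):
    """Relative wavelet energies: one entry per detail band plus the final
    approximation, normalized to sum to 1. Orthonormal Haar DWT; len(x)
    must be divisible by 2**levels."""
    a = np.asarray(x, dtype=float)
    e = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation
        e.append(np.sum(d ** 2))
    e.append(np.sum(a ** 2))
    p = np.array(e)
    return p / p.sum()

def wavelet_entropy(p):
    """Shannon entropy of the relative band energies."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A slow oscillation concentrates its energy in few bands (low entropy);
# white noise spreads its energy across bands (high entropy).
rng = np.random.default_rng(1)
n = 1024
slow = np.sin(2 * np.pi * 2 * np.arange(n) / n)
noise = rng.standard_normal(n)
```

    In the paper's terms, the more "focused" the energy distribution, the lower the wavelet entropy.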

  10. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    Science.gov (United States)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling-law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimations which we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and

  11. Wavelet library for constrained devices

    Science.gov (United States)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs). These constraints can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of no hardware support) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We will demonstrate that HeatWave is suitable for realtime applications with fine control and range to suit transform demands. We shall present experimental results to substantiate these claims. Finally, since this library is intended to be of real use and applied, we considered several well-known differences between common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.

  12. Visibility of wavelet quantization noise

    Science.gov (United States)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
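
    The level-to-frequency relation above is a one-liner; the accompanying threshold model is only a shape illustration (a log-parabola rising away from a most-sensitive frequency), with made-up constants rather than the paper's fitted values:

```python
import math

def dwt_spatial_frequency(r, level):
    """Spatial frequency in cycles/degree of DWT level `level` on a display
    with visual resolution r pixels/degree: f = r * 2**(-level)."""
    return r * 2.0 ** (-level)

def threshold_model(f, t_min=0.1, f_min=2.0, k=0.5):
    """Log-parabola detection-threshold model of the general kind described
    in the abstract; t_min, f_min and k are illustrative, not fitted."""
    return t_min * 10.0 ** (k * math.log10(f / f_min) ** 2)

# A 32 pixel/degree display: level-2 wavelets live at 8 cycles/degree.
f2 = dwt_spatial_frequency(32.0, 2)
```

    Inverting such a model per level and orientation is what yields a "perceptually lossless" quantization matrix.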

  13. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    Science.gov (United States)

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
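
    The two-source idea reduces, in its simplest form, to a Lincoln-Petersen calculation with hair-snag captures as session 1 and rub-tree captures as session 2; this sketch uses Chapman's bias-corrected form with hypothetical counts (the study itself fits Huggins-Pledger models in program MARK):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size:
    n1 animals caught in session 1, n2 in session 2, m caught in both.
    Returns the point estimate and its standard error."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    return n_hat, var ** 0.5

# 100 bears identified at hair snags, 80 at rub trees, 20 detected by both.
n_hat, se = chapman_estimate(100, 80, 20)
```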

  14. Estimating sources, sinks and fluxes of reactive atmospheric compounds within a forest canopy

    Science.gov (United States)

    While few dispute the significance of within-canopy sources or sinks of reactive gaseous and particulate compounds, their estimation continues to be the subject of active research and debate. Reactive species undergo turbulent dispersion within an inhomogeneous flow field, and ma...

  15. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    Science.gov (United States)

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. First, a source localization method (standardized low-resolution brain electromagnetic tomography) is applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and the time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that the activity of different brain parts is estimated simultaneously with the calculation of the effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, the source activity is updated over time in addition to the connectivity estimation. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sliding window was calculated. Comparing the results across the different settings (simulated and real signals), the proposed method gives acceptable results with the least mean-square error under noisy and real conditions.

  16. Estimation of Methane Emissions from Municipal Solid Waste Landfills in China Based on Point Emission Sources

    Directory of Open Access Journals (Sweden)

    Cai Bo-Feng

    2014-01-01

    Citation: Cai, B.-F., Liu, J.-G., Gao, Q.-X., et al., 2014. Estimation of methane emissions from municipal solid waste landfills in China based on point emission sources. Adv. Clim. Change Res. 5(2), doi: 10.3724/SP.J.1248.2014.081.

  17. Estimating values for the moisture source load and buffering capacities from indoor climate measurements

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2008-01-01

    The objective of this study is to investigate the potential for estimating values for the total human-induced moisture source load and the total buffering (moisture storage) capacity of the interior objects with the use of relatively simple measurements and the use of heat, air, and moisture

  18. Estimation of sediment sources using selected chemical tracers in the Perry lake basin, Kansas, USA

    Science.gov (United States)

    Juracek, K.E.; Ziegler, A.C.

    2009-01-01

    The ability to achieve meaningful decreases in sediment loads to reservoirs requires a determination of the relative importance of sediment sources within the contributing basins. In an investigation of sources of fine-grained sediment (clay and silt) within the Perry Lake Basin in northeast Kansas, representative samples of channel-bank sources, surface-soil sources (cropland and grassland), and reservoir bottom sediment were collected, chemically analyzed, and compared. The samples were sieved to isolate the fine fraction, which was analyzed for nutrients (nitrogen and phosphorus), organic and total carbon, 25 trace elements, and the radionuclide cesium-137 (137Cs). On the basis of substantial and consistent compositional differences among the source types, total nitrogen (TN), total phosphorus (TP), total organic carbon (TOC), and 137Cs were selected for use in the estimation of sediment sources. To further account for differences in particle-size composition between the sources and the reservoir bottom sediment, constituent ratio and clay-normalization techniques were used. Computed ratios included TOC to TN, TOC to TP, and TN to TP. Constituent concentrations (TN, TP, TOC) and activities (137Cs) were normalized by dividing by the percentage of clay. Thus, the sediment-source estimations involved the use of seven sediment-source indicators. Within the Perry Lake Basin, the consensus of the seven indicators was that both channel-bank and surface-soil sources were important in the Atchison County Lake and Banner Creek Reservoir subbasins, whereas channel-bank sources were dominant in the Mission Lake subbasin. On the sole basis of 137Cs activity, surface-soil sources contributed the most fine-grained sediment to Atchison County Lake, and channel-bank sources contributed the most fine-grained sediment to Banner Creek Reservoir and Mission Lake. Both the seven-indicator consensus and 137Cs indicated that channel-bank sources were dominant for Perry Lake and that channel-bank sources increased in importance with distance

  19. Analysis of the earthquake data and estimation of source parameters in the Kyungsang basin

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong-Moon; Lee, Jun-Hee [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-04-01

    The purpose of the present study is to determine the response spectrum for the Korean Peninsula, estimate the seismic source parameters, and adequately analyze and simulate ground motion from the seismic characteristics of the Korean Peninsula, comparing this with real data. The estimated seismic source parameters, such as the apparent seismic stress drop, are somewhat unstable because the data are insufficient. As instrumental earthquake data accumulate in the future, these parameters may be refined. Although the equations presented in this report are derived from limited data, they can be utilized both in seismology and earthquake engineering. Finally, predictive equations may be given in terms of magnitude and hypocentral distance using these parameters. The estimation of the predictive equation constructed from the simulation is the object of further study. 34 refs., 27 figs., 10 tabs. (Author)

  20. Use of the spectral analysis for estimating the intensity of a weak periodic source

    International Nuclear Information System (INIS)

    Marseguerra, M.

    1989-01-01

    This paper deals with the possibility of exploiting spectral methods for the analysis of counting experiments in which one has to estimate the intensity of a weak periodic source of particles buried in a high background. The general theoretical expressions obtained here for the auto- and cross-spectra are applied to three kinds of simulated experiments. In all cases it turns out that the source intensity can actually be estimated with a standard deviation comparable to that obtained in classical experiments in which the source can be moved out. Thus spectral methods represent an interesting technique, nowadays easy to implement on low-cost computers, which could also be used in many research fields by suitably redesigning classical experiments. The convenience of using these methods in the field of nuclear safeguards is presently being investigated at our Institute. (orig.)
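
    The idea can be illustrated by simulating a counting experiment in which a weak sinusoidally modulated source sits on a much larger Poisson background; the rates and frequency below are arbitrary. The periodogram line at the known source frequency stands clear of the flat noise floor, and its height recovers the modulation amplitude:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4000, 1.0
t = np.arange(n) * dt
f0 = 0.05                      # known source modulation frequency (200 cycles fit exactly)
background = 100.0             # mean background counts per bin
source = 4.0                   # weak source: 4% of the background
rate = background + source * (1.0 + np.cos(2.0 * np.pi * f0 * t))
counts = rng.poisson(rate)

# Periodogram of the mean-subtracted counts.
x = counts - counts.mean()
power = np.abs(np.fft.rfft(x)) ** 2 / n
freqs = np.fft.rfftfreq(n, dt)

# For x = A*cos(2*pi*f0*t) the periodogram line height is about n*A**2/4,
# so the modulation amplitude (hence the source intensity) is recoverable.
k = np.argmax(power[1:]) + 1
f_hat = freqs[k]
a_hat = 2.0 * np.sqrt(power[k] / n)
```

    Here the line power (~16000) sits orders of magnitude above the Poisson noise floor (~the count variance, ~100 per bin), even though the source is only a few percent of the background.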

  1. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  2. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, considering a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, markedly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.

  3. Traffic characterization and modeling of wavelet-based VBR encoded video

    Energy Technology Data Exchange (ETDEWEB)

    Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
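The N-state Markov idea for the top wavelet can be sketched as a simple state machine over bit-rate levels; the transition matrix and per-state rates below are placeholders, not the values fitted in the paper:

```python
import numpy as np

# Illustrative 3-state transition matrix and per-state bit rates (Mb/s);
# the paper fits these quantities from the top-wavelet traffic, so the
# numbers here are placeholders only.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
rates = np.array([1.0, 3.0, 6.0])

def simulate_traffic(n_frames, start_state=0, seed=0):
    """Generate a per-frame bit-rate sequence from the 3-state Markov model."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_frames)
    state = start_state
    for i in range(n_frames):
        out[i] = rates[state]
        state = rng.choice(3, p=P[state])
    return out

traffic = simulate_traffic(1000)
```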

  4. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task because of the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method was proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD, but such a series would show purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
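The core comparison of the energy-based method, series energy at a given scale versus a Monte-Carlo background, can be sketched with a single-level Haar transform; the use of white noise as the background and of the Haar wavelet are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

def haar_detail_energy(x):
    """Energy of the first-level Haar detail coefficients of a series."""
    x = np.asarray(x, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return float(np.sum(d ** 2))

def noise_background(n, trials=500, seed=0):
    """Monte-Carlo background: detail energies of unit white noise,
    standing in for the paper's background energy distribution."""
    rng = np.random.default_rng(seed)
    return np.array([haar_detail_energy(rng.standard_normal(n))
                     for _ in range(trials)])

# A smooth deterministic signal carries far less high-frequency energy
# than the white-noise background, so it is kept rather than removed.
t = np.linspace(0.0, 1.0, 256)
signal_energy = haar_detail_energy(np.sin(2.0 * np.pi * 2.0 * t))
background = noise_background(256)
looks_deterministic = bool(signal_energy < np.percentile(background, 5))
```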

  5. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    Science.gov (United States)

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  6. Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy

    Science.gov (United States)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of all three in signal processing: the time-frequency localization of the wavelet transform, the extraction of the basic modal characteristics of the data by singular value decomposition, and the quantification of the feature data by information entropy. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, so as to give a definite measurement of the complexity of the original signal. It can be said that wavelet singular entropy has a good application prospect in fault detection, classification and protection. The MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.
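The wavelet-singular-entropy computation itself, SVD of a coefficient matrix followed by Shannon entropy of the normalized singular values, can be sketched as follows (a plain matrix stands in for the wavelet coefficient matrix):

```python
import numpy as np

def wavelet_singular_entropy(coeff_matrix, eps=1e-12):
    """Shannon entropy of the normalized singular-value spectrum of a
    (wavelet) coefficient matrix: low for one dominant mode, high when
    energy is spread over many modes."""
    s = np.linalg.svd(np.asarray(coeff_matrix, dtype=float), compute_uv=False)
    p = s / (s.sum() + eps)
    p = p[p > eps]          # drop numerically zero modes
    return float(-np.sum(p * np.log(p)))

# One dominant mode (rank-1 matrix) gives near-zero entropy; energy spread
# evenly over all modes (identity) gives the maximum, ln(3) for a 3x3 case.
low = wavelet_singular_entropy(np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
high = wavelet_singular_entropy(np.eye(3))
```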

  7. Estimation of pollutant source contribution to the Pampanga River Basin using carbon and nitrogen isotopes

    International Nuclear Information System (INIS)

    Castaneda, Solidad S.; Sta Maria, Efren J.; Ramirez, Jennyvi D.; Collado, Mario B.; Samar, Edna D.

    2013-01-01

    This study assessed and estimated the percentage contribution of potential pollution sources in the Pampanga River Basin using carbon and nitrogen isotopes as environmental tracers. The δ13C and δ15N values were determined in particulate organic matter, surface sediment, and plant tissue samples from point and non-point sources in several land use areas, namely domestic, cropland, livestock, fishery and forestry. Investigations were conducted in the wet and dry seasons (2012 and 2013). Some N sources do not have unique δ15N values, and there is overlap among the different N-source types. The δ13C data from the N sources provided an additional dimension which distinguished animal manure, human waste (septic and sewage), leaf litter, and synthetic fertilizer. Characterization of the non-point N sources based on the isotopic fingerprints obtained from the point sources revealed that domestic, cropland, livestock, and fishery land uses influenced the isotopic composition of the materials, with domestic and cropland providing the most significant influence and livestock contributing to a lesser extent. An isotope mixing model revealed that cropland sources generally contributed the most to pollutant loading during the wet season, from 22% to 98%, while domestic waste contributed more in the dry season, from 55% to 65%. (author)
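A standard two-tracer isotope mixing model of the kind used here solves a small linear system for the source fractions; the end-member signatures below are hypothetical, not the paper's measured values:

```python
import numpy as np

# Hypothetical end-member signatures (per mil), not the paper's values:
# columns = sources (cropland fertilizer, domestic waste, livestock manure),
# rows = tracers (d13C, d15N) plus the mass-balance row (fractions sum to 1).
A = np.array([[-27.0, -24.0, -21.0],
              [  3.0,  10.0,  15.0],
              [  1.0,   1.0,   1.0]])
mixture = np.array([-24.3, 8.9, 1.0])   # observed d13C, d15N of a sample

fractions = np.linalg.solve(A, mixture)  # contribution of each source
```

With two tracers the model is exactly determined for three sources; more sources require Bayesian or least-squares mixing approaches.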

  8. A Wavelet-Based Optimization Method for Biofuel Production

    Directory of Open Access Journals (Sweden)

    Maurizio Carlini

    2018-02-01

    Full Text Available On a global scale many countries are still heavily dependent on crude oil to produce energy and fuel for transport, with a resulting increase of atmospheric pollution. A possible solution to this problem is to find eco-sustainable energy sources. A potential choice could be the use of biodiesel as fuel. The work presented aims to characterise the transesterification reaction of waste peanut frying oil using colour analysis and wavelet analysis. The biodiesel production, with the complete absence of mucilages, was evaluated through a suitable set of wavelet coefficient energies and scalograms. The physical characteristics of the biodiesel are influenced by mucilages; in particular the viscosity, which is a fundamental parameter for the correct use of the biodiesel, might be compromised. The presence of contaminants in the samples can often be missed by visual analysis. The low- and high-frequency wavelet analysis, by investigating the energy change of the wavelet coefficients, provided a valid characterisation of the quality of the samples, related to the absence of mucilages, which is consistent with the experimental results. The proposed method represents a preliminary analysis, before the subsequent chemical-physical analysis, that can be developed during the production phases of the biodiesel in order to optimise the process, avoiding the presence of impurities in suspension in the final product.

  9. The continental source of glyoxal estimated by the synergistic use of spaceborne measurements and inverse modelling

    Directory of Open Access Journals (Sweden)

    A. Richter

    2009-11-01

    Full Text Available Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies largely underestimate the observed glyoxal satellite columns, pointing to the existence of an additional land glyoxal source of biogenic origin. We include an extra glyoxal source in the model and we explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as a secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.

    In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the underestimation of the glyoxal columns by the model, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal

  10. Reassessment of the technical bases for estimating source terms. Draft report for comment

    International Nuclear Information System (INIS)

    Silberberg, M.; Mitchell, J.A.; Meyer, R.O.; Pasedag, W.F.; Ryder, C.P.; Peabody, C.A.; Jankowski, M.W.

    1985-07-01

    NUREG-0956 describes the NRC staff and contractor efforts to reassess and update the agency's analytical procedures for estimating accident source terms for nuclear power plants. The effort included development of a new source term analytical procedure - a set of computer codes - that is intended to replace the methodology of the Reactor Safety Study (WASH-1400) and to be used in reassessing the use of TID-14844 assumptions (10 CFR 100). NUREG-0956 describes the development of these codes, the demonstration of the codes to calculate source terms for specific cases, the peer review of this work, some perspectives on the overall impact of new source terms on plant risks, the plans for related research projects, and the conclusions and recommendations resulting from the effort

  11. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model were presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, has been presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.
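The "classic Gaussian model" that the MLA models are coupled with is, in its textbook form, the ground-reflecting Gaussian plume; a minimal sketch with illustrative parameters:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (kg/m^3) for a
    continuous point source of strength q (kg/s) in mean wind u (m/s),
    at crosswind offset y and height z, for effective stack height h.
    sigma_y and sigma_z are the dispersion parameters at the given
    downwind distance."""
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level value versus an off-axis point, for an
# illustrative release (all parameter values are assumptions).
c0 = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=50.0, sigma_y=30.0, sigma_z=20.0)
c_off = gaussian_plume(q=1.0, u=5.0, y=60.0, z=0.0, h=50.0, sigma_y=30.0, sigma_z=20.0)
```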

  12. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model were presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, has been presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.

  13. Estimating uncertainty in subsurface glider position using transmissions from fixed acoustic tomography sources.

    Science.gov (United States)

    Van Uffelen, Lora J; Nosal, Eva-Marie; Howe, Bruce M; Carter, Glenn S; Worcester, Peter F; Dzieciuch, Matthew A; Heaney, Kevin D; Campbell, Richard L; Cross, Patrick S

    2013-10-01

    Four acoustic Seagliders were deployed in the Philippine Sea November 2010 to April 2011 in the vicinity of an acoustic tomography array. The gliders recorded over 2000 broadband transmissions at ranges up to 700 km from moored acoustic sources as they transited between mooring sites. The precision of glider positioning at the time of acoustic reception is important to resolve the fundamental ambiguity between position and sound speed. The Seagliders utilized GPS at the surface and a kinematic model below for positioning. The gliders were typically underwater for about 6.4 h, diving to depths of 1000 m and traveling on average 3.6 km during a dive. Measured acoustic arrival peaks were unambiguously associated with predicted ray arrivals. Statistics of travel-time offsets between received arrivals and acoustic predictions were used to estimate range uncertainty. Range (travel time) uncertainty between the source and the glider position from the kinematic model is estimated to be 639 m (426 ms) rms. Least-squares solutions for glider position estimated from acoustically derived ranges from 5 sources differed by 914 m rms from modeled positions, with estimated uncertainty of 106 m rms in horizontal position. Error analysis included 70 ms rms of uncertainty due to oceanic sound-speed variability.
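The least-squares position solution from acoustically derived ranges can be sketched as a Gauss-Newton fit; flat 2-D geometry and noise-free ranges are assumed here for illustration, whereas the paper works with travel times through a 3-D sound-speed field:

```python
import numpy as np

def trilaterate(sources, ranges, x0, iters=20):
    """Gauss-Newton least-squares position fix from ranges to fixed sources."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sources - x, axis=1)      # predicted ranges
        J = (x - sources) / d[:, None]               # d(range)/d(position)
        step, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x = x + step
    return x

# Three hypothetical source positions (m) and exact ranges to (3, 4).
S = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
r = np.linalg.norm(S - true_pos, axis=1)
pos = trilaterate(S, r, x0=[1.0, 1.0])
```

With noisy ranges the residual of this fit gives exactly the kind of position-uncertainty estimate the paper reports.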

  14. SymPS: BRDF Symmetry Guided Photometric Stereo for Shape and Light Source Estimation.

    Science.gov (United States)

    Lu, Feng; Chen, Xiaowu; Sato, Imari; Sato, Yoichi

    2018-01-01

    We propose uncalibrated photometric stereo methods that address the problem due to unknown isotropic reflectance. At the core of our methods is the notion of "constrained half-vector symmetry" for general isotropic BRDFs. We show that such symmetry can be observed in various real-world materials, and it leads to new techniques for shape and light source estimation. Based on the 1D and 2D representations of the symmetry, we propose two methods for surface normal estimation; one focuses on accurate elevation angle recovery for surface normals when the light sources only cover the visible hemisphere, and the other for comprehensive surface normal optimization in the case that the light sources are also non-uniformly distributed. The proposed robust light source estimation method also plays an essential role to let our methods work in an uncalibrated manner with good accuracy. Quantitative evaluations are conducted with both synthetic and real-world scenes, which produce the state-of-the-art accuracy for all of the non-Lambertian materials in MERL database and the real-world datasets.

  15. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.

  16. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs; they can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stop condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating functions based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM) which includes: its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters

  17. Estimation of Sputtering Damages on a Magnetron H- Ion Source Induced by Cs+ and H+ Ions

    CERN Document Server

    Pereira, H; Alessi, J; Kalvas, T

    2013-01-01

    An H− ion source is being developed for CERN’s Linac4 accelerator. A beam current requirement of 80 mA and a reliability above 99% during 1 year with 3 month uninterrupted operation periods are mandatory. To design a low-maintenance long life-time source, it is important to investigate and understand the wear mechanisms. A cesiated plasma discharge ion source, such as the BNL magnetron source, is a good candidate for the Linac4 ion source. However, in the magnetron source operated at BNL, the removal of material from the molybdenum cathode and the stainless steel anode cover plate surfaces is visible after extended operation periods. The observed sputtering traces are shown to result from cesium vapors and hydrogen gas ionized in the extraction region and subsequently accelerated by the extraction field. This paper presents a quantitative estimate of the ionization of cesium and hydrogen by the electron and H− beams in the extraction region of BNL’s magnetron ion source. The respective contributions o...

  18. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    Science.gov (United States)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
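For broadband thermal radiation, the maximum-SNR combining weights are (up to scaling) the dominant eigenvector of the array sample covariance; a sketch with simulated channel gains, where all numbers are illustrative rather than taken from the Ka-band system:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 7-channel snapshots: one common broadband source signal seen
# with per-channel gains (illustrative), plus independent receiver noise.
gains = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
source = rng.standard_normal(5000)
X = gains[:, None] * source + 0.3 * rng.standard_normal((7, 5000))

R = X @ X.T / X.shape[1]            # sample covariance across channels
w = np.linalg.eigh(R)[1][:, -1]     # dominant eigenvector = combining weights
if w @ gains < 0:                   # eigenvectors have arbitrary sign
    w = -w
```

The recovered weight vector should point along the true channel-gain direction, which is what the correlator-based weight estimates approximate in hardware.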

  19. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo… radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis…

  20. Workflow for near-surface velocity automatic estimation: Source-domain full-traveltime inversion followed by waveform inversion

    KAUST Repository

    Liu, Lu; Fei, Tong; Luo, Yi; Guo, Bowen

    2017-01-01

    This paper presents a workflow for near-surface velocity automatic estimation using the early arrivals of seismic data. This workflow comprises two methods, source-domain full traveltime inversion (FTI) and early-arrival waveform inversion. Source…

  1. Wavelet analysis of the seismograms for tsunami warning

    Directory of Open Access Journals (Sweden)

    A. Chamoli

    2010-10-01

    Full Text Available The complexity of the tsunami phenomenon makes the available warning systems not very effective in practical situations. The problem arises due to the time elapsed in data transfer, processing and modeling. The modeling and simulation need as input the fault geometry and mechanism of the earthquake. The estimation of these parameters and other a priori information increases the time needed to issue any warning. Here, wavelet analysis is used to identify the tsunamigenesis of an earthquake. The frequency content of the seismogram in the time-scale domain is examined using the wavelet transform. The energy content in high frequencies is calculated and gives a threshold for tsunami warnings. Only the first few minutes of the seismograms of the earthquake events are used for quick estimation. The results for the earthquake events of the Andaman-Sumatra region and other historic events are promising.
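The high-frequency energy threshold idea can be sketched as the fraction of seismogram energy in the highest-frequency Haar detail band; the wavelet choice and the threshold value here are assumptions for illustration:

```python
import numpy as np

def hf_energy_fraction(x):
    """Fraction of total energy in the first-level (highest-frequency)
    Haar detail band of an even-length series."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    total = np.sum(approx ** 2) + np.sum(detail ** 2)
    return float(np.sum(detail ** 2) / total)

# A slow oscillation concentrates its energy at low frequencies, while a
# broadband record spreads roughly half its energy into the detail band.
t = np.arange(1024) / 1024.0
slow = hf_energy_fraction(np.sin(2.0 * np.pi * 3.0 * t))
broadband = hf_energy_fraction(np.random.default_rng(0).standard_normal(1024))
flagged = bool(broadband > 0.25 > slow)   # 0.25 is an illustrative threshold
```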

  2. Visualization of a Turbulent Jet Using Wavelets

    Institute of Scientific and Technical Information of China (English)

    Hui LI

    2001-01-01

    An application of multiresolution image analysis to turbulence was investigated in this paper, in order to visualize the coherent structure and the most essential scales governing turbulence. The digital imaging photograph of a jet slice was decomposed by the two-dimensional discrete wavelet transform based on Daubechies, Coifman and Beylkin bases. The best choice of orthogonal wavelet basis for analyzing the image of the turbulent structures was first discussed. It is found that the orthonormal wavelet families with index N<10 are inappropriate for multiresolution image analysis of turbulent flow. The multiresolution images of turbulent structures were very similar when using wavelet bases with higher index numbers, even though the wavelet bases are different functions. From the image components in orthogonal wavelet spaces at different scales, further evidence of the multi-scale structures in the jet can be observed, and the edges of the vortices at different resolutions or scales and the coherent structure can be easily extracted.

  3. Modeling Network Traffic in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Sheng Ma

    2004-12-01

    Full Text Available This work discovers that although network traffic has complicated short- and long-range temporal dependence, the corresponding wavelet coefficients are no longer long-range dependent. Therefore, a "short-range" dependent process can be used to model network traffic in the wavelet domain. Both independent and Markov models are investigated. Theoretical analysis shows that the independent wavelet model is sufficiently accurate in terms of the buffer overflow probability for Fractional Gaussian Noise traffic. Any model which captures additional correlations in the wavelet domain only improves the performance marginally. The independent wavelet model is then used as a unified approach to model network traffic including VBR MPEG video and Ethernet data. The computational complexity is O(N) for developing such wavelet models and generating synthesized traffic of length N, which is among the lowest attained.

  4. Cross wavelet analysis: significance testing and pitfalls

    Directory of Open Access Journals (Sweden)

    D. Maraun

    2004-01-01

    Full Text Available In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximative formula for the critical value can be utilized. In a second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears to be unsuitable for significance testing of the interrelation between two processes. Instead, one should rather apply wavelet coherency. Furthermore, we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.
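The Monte-Carlo construction of a critical value can be illustrated in miniature with ordinary correlation between two independent white-noise series; the paper does the analogous computation for wavelet coherency:

```python
import numpy as np

def mc_critical_value(n, trials=2000, alpha=0.05, seed=0):
    """Monte-Carlo (1 - alpha) critical value for the absolute sample
    correlation between two independent white-noise series of length n.
    Observed correlations above this value are deemed significant."""
    rng = np.random.default_rng(seed)
    stats = np.empty(trials)
    for i in range(trials):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        stats[i] = abs(np.corrcoef(a, b)[0, 1])
    return float(np.quantile(stats, 1.0 - alpha))

crit = mc_critical_value(100)   # roughly 2 / sqrt(n) for white noise
```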

  5. Logic Estimation of the Optimum Source Neutron Energy for BNCT of Brain Tumors

    International Nuclear Information System (INIS)

    Dorrah, M.A.; Gaber, F.A.; Abd Elwahab, M.A.; Kotb, M.A.; Mohammed, M.M.

    2012-01-01

    BNCT is a very complicated technique, primarily due to the complexity of the elemental composition of the brain. Moreover, numerous components contribute to the overall radiation dose, both to normal brain and to tumor. Simple algebraic summation cannot be applied to these dose components, since each component must first be weighted by its relative biological effectiveness (RBE) value. Unfortunately, there is no worldwide agreement on these RBE values. For that reason, the parameters required for accurate planning of BNCT of brain tumors located at different depths in the brain have remained obscure. The most important of these parameters is the source neutron energy. Thermal neutrons were formerly employed for BNCT, but they failed to prove therapeutic efficacy. Later on, epithermal neutrons were suggested, on the proposal that they would be sufficiently thermalized while transported through brain tissue. However, debate arose regarding the source neutron energy appropriate for treating brain tumors located at different depths in the brain. Again, insufficient knowledge of the RBE values of the different dose components was a major obstacle. A new concept was adopted for estimating the optimum source neutron energy appropriate for different circumstances of BNCT. Four postulates on the optimum source neutron energy were worked out, almost entirely independent of the RBE values of the different dose components. Four corresponding conditions on the optimum source neutron energy were deduced. An energy escalation study was carried out, investigating 65 different source neutron energies between 0.01 eV and 13.2 MeV. The MCNP4B Monte Carlo neutron transport code was utilized to study the behavior of neutrons in the brain. The deduced four conditions were applied to the results of the 65 steps of the neutron energy escalation study. 
A source neutron energy range of a few electron volts (eV) to about 30 keV was estimated to be the most appropriate for BNCT of brain tumors located at

  6. Multidimensional signaling via wavelet packets

    Science.gov (United States)

    Lindsey, Alan R.

    1995-04-01

    This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.

  7. Wavelet analysis of epileptic spikes

    Science.gov (United States)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been the long standing problem in EEG analysis, especially after long-term monitoring became common in investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
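
    The scale-dependent behavior the abstract exploits can be illustrated with a minimal continuous wavelet transform: a sharp transient dominates the fine-scale coefficients, while a smooth background rhythm does not. The Ricker ("Mexican hat") wavelet, the synthetic signal, and the argmax localization below are illustrative assumptions, not the authors' detection algorithm:

```python
import numpy as np

def ricker(points, a):
    """Ricker ('Mexican hat') wavelet sampled at `points` positions, width `a`."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(x, widths):
    """Continuous wavelet transform via convolution, one row per scale."""
    return np.array([np.convolve(x, ricker(10 * w + 1, w), mode='same')
                     for w in widths])

t = np.linspace(0, 1, 512)
eeg = np.sin(2 * np.pi * 8 * t)   # smooth 8 Hz background rhythm
eeg[256] += 5.0                   # spike-like transient
coeffs = cwt(eeg, widths=[2, 4, 8])
spike_idx = int(np.argmax(np.abs(coeffs[0])))   # fine scale localizes the spike
```

    Comparing coefficient magnitudes across the rows of `coeffs` is the kind of cross-scale behavior a detector can threshold on.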

  8. Wavelet analysis of epileptic spikes

    CERN Document Server

    Latka, M; Kozik, A; West, B J; Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-01-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous, pathological discharge of many neurons. The reliable detection of such potentials has been the long standing problem in EEG analysis, especially after long-term monitoring became common in investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.

  9. A New Perceptual Mapping Model Using Lifting Wavelet Transform

    OpenAIRE

    Taha TahaBasheer; Ehkan Phaklen; Ngadiran Ruzelita

    2017-01-01

    Perceptual mapping approaches have been widely used in visual information processing in multimedia and Internet of Things (IoT) applications. Accumulative Lifting Difference (ALD) is proposed in this paper as a texture mapping model based on the low-complexity lifting wavelet transform, combined with luminance masking to create an efficient perceptual mapping model for estimating the Just Noticeable Distortion (JND) in digital images. In addition to low-complexity operations, experimental results sho...

  10. Wavelet Analysis for Molecular Dynamics

    Science.gov (United States)

    2015-06-01

    Our method takes as input the topology and sparsity of the bonding structure of a molecular system, and returns a hierarchical set of system-specific... problems, such as modeling crack initiation and propagation, or interfacial phenomena. In the present work, we introduce a wavelet-based approach to extend... Several functional forms are common for angle potentials, complicating not only implementation but also the choice of approximation. In all cases, the

  11. Wavelet analysis in two-dimensional tomography

    Science.gov (United States)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are examined for diagnosing its pathological changes. The effectiveness of polarization selection in obtaining wavelet-coefficient images is also shown. The wavelet structures characterizing the processes of skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.

  12. Wavelet Radiosity on Arbitrary Planar Surfaces

    OpenAIRE

    Holzschuch , Nicolas; Cuny , François; Alonso , Laurent

    2000-01-01

    International conference with proceedings and peer review; International audience; Wavelet radiosity is, by its nature, restricted to parallelograms or triangles. This paper presents an innovative technique enabling wavelet radiosity computations on planar surfaces of arbitrary shape, including concave contours or contours with holes. This technique replaces the need for triangulating such complicated shapes, greatly reducing the complexity of the wavelet radiosity algorithm and the computati...

  13. Antenatal surveillance through estimates of the sources underlying the abdominal phonogram: a preliminary study

    International Nuclear Information System (INIS)

    Jiménez-González, A; James, C J

    2013-01-01

    Today, it is generally accepted that current methods for biophysical antenatal surveillance do not facilitate a comprehensive and reliable assessment of foetal well-being and that continuing research into alternative methods is necessary to improve antenatal monitoring procedures. In our research, attention has been paid to the abdominal phonogram, a signal that is recorded by positioning an acoustic sensor on the maternal womb and contains valuable information about foetal status, but which is hidden by maternal and environmental sources. To recover such information, previous work has used single-channel independent component analysis (SCICA) on the abdominal phonogram and successfully retrieved estimates of the foetal phonocardiogram, the maternal phonocardiogram, the maternal respirogram and noise. The availability of these estimates made it possible for the current study to focus on their evaluation as sources for antenatal surveillance purposes. To this end, the foetal heart rate (FHR), the foetal heart sounds morphology, the maternal heart rate (MHR) and the maternal breathing rate (MBR) were collected from the estimates retrieved from a dataset of 25 abdominal phonograms. Next, these parameters were compared with reference values to quantify the significance of the physiological information extracted from the estimates. As a result, it has been seen that the instantaneous FHR, the instantaneous MHR and the MBR collected from the estimates consistently followed the trends given by the reference signals, which is a promising outcome for this preliminary study. Thus, as far as this study has gone, it can be said that the independent traces retrieved by SCICA from the abdominal phonogram are likely to become valuable sources of information for well-being surveillance, both foetal and maternal. (paper)

  14. Earthquake source scaling and self-similarity estimation from stacking P and S spectra

    Science.gov (United States)

    Prieto, Germán A.; Shearer, Peter M.; Vernon, Frank L.; Kilb, Debi

    2004-08-01

    We study the scaling relationships of source parameters and the self-similarity of earthquake spectra by analyzing a cluster of over 400 small earthquakes (ML = 0.5 to 3.4) recorded by the Anza seismic network in southern California. We compute P, S, and preevent noise spectra from each seismogram using a multitaper technique and approximate source and receiver terms by iteratively stacking the spectra. To estimate scaling relationships, we average the spectra in size bins based on their relative moment. We correct for attenuation by using the smallest moment bin as an empirical Green's function (EGF) for the stacked spectra in the larger moment bins. The shapes of the log spectra agree within their estimated uncertainties after shifting along the ω⁻³ line expected for self-similarity of the source spectra. We also estimate corner frequencies and radiated energy from the relative source spectra using a simple source model. The ratio between radiated seismic energy and seismic moment (proportional to apparent stress) is nearly constant with increasing moment over the magnitude range of our EGF-corrected data (ML = 1.8 to 3.4). Corner frequencies vary inversely as the cube root of moment, as expected from the observed self-similarity in the spectra. The ratio between P and S corner frequencies is observed to be 1.6 ± 0.2. We obtain values for absolute moment and energy by calibrating our results to local magnitudes for these earthquakes. This yields a S to P energy ratio of 9 ± 1.5 and a value of apparent stress of about 1 MPa.
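
    The cube-root scaling reported above implies that every three-decade increase in seismic moment lowers the corner frequency by one decade. A minimal numeric check; the reference values `fc_ref` and `m0_ref` are arbitrary placeholders, not the study's calibration:

```python
def corner_frequency(m0, fc_ref=10.0, m0_ref=1e13):
    """Corner frequency under self-similar scaling: fc proportional to M0**(-1/3).
    fc_ref is the corner frequency (Hz) of a reference event of moment m0_ref."""
    return fc_ref * (m0 / m0_ref) ** (-1.0 / 3.0)

# A factor-of-1000 increase in moment lowers the corner frequency
# by exactly one decade under self-similarity.
ratio = corner_frequency(1e16) / corner_frequency(1e13)   # -> 0.1
```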

  15. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    Science.gov (United States)

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  16. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: •A new method for accurate simulation of heat source parameters was presented. •The partial least-squares regression analysis was recommended in the method. •The welding experiment results verified accuracy of the proposed method. -- Abstract: Heat source parameters were usually recommended by experience in the welding simulation process, which induced errors in simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of heat source parameters was carried out. The relationships between heat source parameters and welding pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both the multiple regression analysis (MRA) and the partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method. Comparisons of both methods were performed. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA were feasible and accurate for prediction of heat source parameters in welding simulation. However, the PLSRA was recommended for its advantages of requiring less simulation data.

  17. Discrete wavelet transform analysis of surface electromyography for the fatigue assessment of neck and shoulder muscles.

    Science.gov (United States)

    Chowdhury, Suman Kanti; Nimbarte, Ashish D; Jaridi, Majid; Creese, Robert C

    2013-10-01

    Assessment of neuromuscular fatigue is essential for early detection and prevention of risks associated with work-related musculoskeletal disorders. In recent years, discrete wavelet transform (DWT) of surface electromyography (SEMG) has been used to evaluate muscle fatigue, especially during dynamic contractions when the SEMG signal is non-stationary. However, its application to the assessment of work-related neck and shoulder muscle fatigue is not well established. Therefore, the purpose of this study was to establish DWT analysis as a suitable method to conduct quantitative assessment of neck and shoulder muscle fatigue under dynamic repetitive conditions. Ten human participants performed 40 min of fatiguing repetitive arm and neck exertions while SEMG data from the upper trapezius and sternocleidomastoid muscles were recorded. Ten of the most commonly used wavelet functions were used to conduct the DWT analysis. Spectral changes estimated using the power of wavelet coefficients in the 12-23 Hz frequency band showed the highest sensitivity to fatigue induced by the dynamic repetitive exertions. Although most of the wavelet functions tested in this study reasonably demonstrated the expected power trend with fatigue development and recovery, the overall performance of the "Rbio3.1" wavelet in terms of power estimation and statistical significance was better than the remaining nine wavelets. Copyright © 2013 Elsevier Ltd. All rights reserved.
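
    The band-power computation described above can be sketched with a level-by-level DWT. A Haar filter is used here for self-containedness rather than the rbio3.1 wavelet the study recommends, and the sampling rate and test tone are illustrative:

```python
import numpy as np

def dwt_level_powers(x, n_levels):
    """Mean power of DWT detail coefficients per level (Haar filter).
    Level k (1-based) covers roughly fs/2**(k+1) .. fs/2**k Hz."""
    powers = []
    approx = np.asarray(x, dtype=float)
    for _ in range(n_levels):
        even, odd = approx[0::2], approx[1::2]
        detail = (even - odd) / np.sqrt(2)   # high-pass half
        approx = (even + odd) / np.sqrt(2)   # low-pass half, carried forward
        powers.append(float(np.mean(detail ** 2)))
    return powers

fs = 1024.0
t = np.arange(4096) / fs
semg = np.sin(2 * np.pi * 24 * t)      # test tone at 24 Hz
powers = dwt_level_powers(semg, 6)
# Level 5 spans ~16-32 Hz, so it should carry the most power.
dominant_level = 1 + powers.index(max(powers))
```

    Tracking the power of the level whose band covers 12-23 Hz over successive time windows gives the fatigue trend the study reports.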

  18. Nuclear reaction models - source term estimation for safety design in accelerators

    International Nuclear Information System (INIS)

    Nandy, Maitreyee

    2013-01-01

    Accelerator driven subcritical system (ADSS) employs proton induced spallation reaction at a few GeV. Safety design of these systems involves source term estimation in two steps - multiple fragmentation of the target and n+γ emission through a fast process followed by statistical decay of the primary fragments. The prompt radiation field is estimated in the framework of quantum molecular dynamics (QMD) theory, intra-nuclear cascade or Monte Carlo calculations. A few nuclear reaction model codes used for this purpose are QMD, JQMD, Bertini, INCL4, PHITS, followed by statistical decay codes like ABLA, GEM, GEMINI, etc. In the case of electron accelerators photons and photoneutrons dominate the prompt radiation field. High energy photon yield through Bremsstrahlung is estimated in the framework of Born approximation while photoneutron production is calculated using giant dipole resonance and quasi-deuteron formation cross section. In this talk hybrid and exciton PEQ models and QMD formalism will be discussed briefly

  19. Estimates of Imaging Times for Conventional and Synchrotron X-Ray Sources

    CERN Document Server

    Kinney, J

    2003-01-01

    The following notes are to be taken as estimates of the time requirements for imaging NIF targets in three-dimensions with absorption contrast. The estimates ignore target geometry and detector inefficiency, and focus only on the statistical question of detecting compositional (structural) differences between adjacent volume elements in the presence of noise. The basic equations, from the classic reference by Grodzins, consider imaging times in terms of the required number of photons necessary to provide an image with given resolution and noise. The time estimates, therefore, have been based on the calculated x-ray fluxes from the proposed Advanced Light Source (ALS) imaging beamline, and from the calculated flux for a tungsten anode x-ray generator operated in a point focus mode.

  20. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
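
    Once the RIR at each microphone has been estimated, the final step (direct-path picking and differencing) can be sketched as follows. The threshold rule, sampling rate, and synthetic RIRs are illustrative assumptions, not the paper's OC-based estimation procedure:

```python
import numpy as np

def arrival_index(rir, threshold=0.5):
    """Index of the first tap exceeding `threshold` times the RIR's peak,
    taken as the direct-path arrival (later taps are reverberant echoes)."""
    rir = np.abs(np.asarray(rir, dtype=float))
    return int(np.argmax(rir >= threshold * rir.max()))

fs = 8000.0
rir1 = np.zeros(512); rir1[40] = 1.0; rir1[200] = 0.6   # direct path + echo
rir2 = np.zeros(512); rir2[52] = 1.0; rir2[230] = 0.6
tde = (arrival_index(rir2) - arrival_index(rir1)) / fs   # 12 samples -> 1.5 ms
```

    Picking the direct path from the RIR, rather than cross-correlating the raw microphone signals, is what makes the estimate robust to the later reverberant taps.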

  1. Wavelet analysis and its applications an introduction

    CERN Document Server

    Yajnik, Archit

    2013-01-01

    "Wavelet analysis and its applications: an introduction" demonstrates the consequences of Fourier analysis and introduces the concept of the wavelet, followed by applications, lucidly. One-dimensional signals sometimes need to be oversampled. A novel technique for oversampling the digital signal is introduced in this book, along with the necessary illustrations. The technique of feature extraction in the development of optical character recognition software for any natural language, along with a wavelet-based feature extraction technique, is demonstrated using the multiresolution analysis of wavelets.

  2. Wavelets for Sparse Representation of Music

    DEFF Research Database (Denmark)

    Endelt, Line Ørtoft; Harbo, Anders La-Cour

    2004-01-01

    We are interested in obtaining a sparse representation of music signals by means of a discrete wavelet transform (DWT). That means we want the energy in the representation to be concentrated in few DWT coefficients. It is well-known that the decay of the DWT coefficients is strongly related to the number of vanishing moments of the mother wavelet, and to the smoothness of the signal. In this paper we present the result of applying two classical families of wavelets to a series of musical signals. The purpose is to determine a general relation between the number of vanishing moments of the wavelet
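
    The link between vanishing moments and coefficient decay can be demonstrated directly: a wavelet with two vanishing moments (Daubechies-2) annihilates a linear trend that the one-moment Haar wavelet does not. The filter taps below are the standard db2 high-pass coefficients; the linear ramp is an illustrative stand-in for a smooth stretch of signal:

```python
import numpy as np

SQRT3 = np.sqrt(3.0)
# Daubechies-2 high-pass (wavelet) filter: two vanishing moments.
DB2_HI = np.array([1 - SQRT3, -(3 - SQRT3), 3 + SQRT3, -(1 + SQRT3)]) / (4 * np.sqrt(2))
# Haar high-pass filter: one vanishing moment.
HAAR_HI = np.array([1.0, -1.0]) / np.sqrt(2)

def detail_coeffs(x, hi):
    """One-level DWT detail coefficients (valid part only, no boundary effects)."""
    return np.convolve(x, hi[::-1], mode='valid')[::2]

x = np.linspace(0.0, 1.0, 256)            # a linear ramp: a smooth signal
haar_energy = np.sum(detail_coeffs(x, HAAR_HI) ** 2)
db2_energy = np.sum(detail_coeffs(x, DB2_HI) ** 2)
# db2's second vanishing moment annihilates the linear trend: db2_energy ~ 0,
# while the Haar details retain a constant nonzero value along the ramp.
```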

  3. Wavelet-based prediction of oil prices

    International Nuclear Information System (INIS)

    Yousefi, Shahriar; Weinreich, Ilona; Reinarz, Dominik

    2005-01-01

    This paper illustrates an application of wavelets as a possible vehicle for investigating the issue of market efficiency in futures markets for oil. The paper provides a short introduction to the wavelets and a few interesting wavelet-based contributions in economics and finance are briefly reviewed. A wavelet-based prediction procedure is introduced and market data on crude oil is used to provide forecasts over different forecasting horizons. The results are compared with data from futures markets for oil and the relative performance of this procedure is used to investigate whether futures markets are efficiently priced

  4. Simultaneous identification of unknown groundwater pollution sources and estimation of aquifer parameters

    Science.gov (United States)

    Datta, Bithin; Chakrabarty, Dibakar; Dhar, Anirban

    2009-09-01

    Pollution source identification is a frequently encountered problem. In the absence of prior information about flow and transport parameters, the performance of source identification models depends on the accuracy of the estimates of these parameters. A methodology is developed for simultaneous pollution source identification and parameter estimation in groundwater systems. The groundwater flow and transport simulator is linked to the nonlinear optimization model as an external module. The simulator defines the flow and transport processes, and serves as a binding equality constraint. The Jacobian matrix which determines the search direction in the nonlinear optimization model links the groundwater flow-transport simulator and the optimization method. Performance of the proposed methodology using spatiotemporal hydraulic head values and pollutant concentration measurements is evaluated by solving illustrative problems. Two different decision model formulations are developed. The computational efficiency of these models is compared using two nonlinear optimization algorithms. The proposed methodology addresses some of the computational limitations of using the embedded optimization technique, which embeds the discretized flow and transport equations as equality constraints for optimization. Solution results obtained are also found to be better than those obtained using the embedded optimization technique. The performance evaluations reported here demonstrate the potential applicability of the developed methodology for a fairly large aquifer study area with multiple unknown pollution sources.

  5. Variability in estimated runoff in a forested area based on different cartographic data sources

    Energy Technology Data Exchange (ETDEWEB)

    Fragoso, L.; Quirós, E.; Durán-Barroso, P.

    2017-11-01

    Aim of study: The goal of this study is to analyse variations in curve number (CN) values produced by different cartographic data sources in a forested watershed, and determine which of them best fit with measured runoff volumes. Area of study: A forested watershed located in western Spain. Material and methods: Four digital cartographic data sources were used to determine the runoff CN in the watershed. Main results: None of the cartographic sources provided all the information necessary to determine properly the CN values. Our proposed methodology, focused on the tree canopy cover, improves the achieved results. Research highlights: The estimation of the CN value in forested areas should be attained as a function of tree canopy cover and new calibrated tables should be implemented in a local scale.
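
    The curve number feeds the standard SCS-CN runoff equation, and a minimal implementation (metric units) shows how sensitive the estimated runoff is to the CN value, which is why the choice of cartographic source matters. The storm depth and the two CN values below are arbitrary examples, not values from the study:

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """Direct runoff Q (mm) from rainfall P (mm) by the SCS-CN method.
    S is the potential maximum retention; lam*S is the initial abstraction."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0          # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# The same 60 mm storm yields very different runoff under two plausible CNs.
q_low, q_high = scs_runoff(60.0, 55), scs_runoff(60.0, 75)
```

    Because Q depends nonlinearly on CN, even modest disagreements between cartographic sources propagate into large differences in estimated runoff volume.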

  6. Resource communication: Variability in estimated runoff in a forested area based on different cartographic data sources

    Directory of Open Access Journals (Sweden)

    Laura Fragoso

    2017-10-01

    Full Text Available Aim of study: The goal of this study is to analyse variations in curve number (CN) values produced by different cartographic data sources in a forested watershed, and determine which of them best fit with measured runoff volumes. Area of study: A forested watershed located in western Spain. Material and methods: Four digital cartographic data sources were used to determine the runoff CN in the watershed. Main results: None of the cartographic sources provided all the information necessary to determine properly the CN values. Our proposed methodology, focused on the tree canopy cover, improves the achieved results. Research highlights: The estimation of the CN value in forested areas should be attained as a function of tree canopy cover and new calibrated tables should be implemented in a local scale.

  7. Added Value of uncertainty Estimates of SOurce term and Meteorology (AVESOME)

    DEFF Research Database (Denmark)

    Sørensen, Jens Havskov; Schönfeldt, Fredrik; Sigg, Robert

    In the early phase of a nuclear accident, two large sources of uncertainty exist: one related to the source term and one associated with the meteorological data. Operational methods are being developed in AVESOME for quantitative estimation of uncertainties in atmospheric dispersion prediction... E.g. at national meteorological services, the proposed methodology is feasible for real-time use, thereby adding value to decision support. In the recent NKS-B projects MUD, FAUNA and MESO, the implications of meteorological uncertainties for nuclear emergency preparedness and management have been studied... Uncertainty in atmospheric dispersion model forecasting stemming from both the source term and the meteorological data is examined. Ways to implement the uncertainties of forecasting in DSSs, and the impacts on real-time emergency management, are described. The proposed methodology allows for efficient real...

  8. Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain

    OpenAIRE

    Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D.

    2013-01-01

    In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution proposed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the pdf of noise-free 3D complex wavelet coefficients is proposed which is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters which are able to capture th...

  9. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
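
    The Monte Carlo correction described above can be sketched as follows. The Beta, Gamma, and Normal parameter choices are illustrative placeholders, not the distributions fitted from the study's fecal reference samples:

```python
import numpy as np

def true_concentration_mc(c_obs, n=100000, seed=1):
    """Monte Carlo distribution of the true marker concentration given an
    observed qPCR concentration, correcting for assay sensitivity, a
    false-positive background, and sample precision error (illustrative)."""
    rng = np.random.default_rng(seed)
    sensitivity = rng.beta(90, 10, size=n)      # assay detects ~90% of true copies
    false_pos = rng.gamma(2.0, 5.0, size=n)     # background signal, copies per sample
    meas_err = rng.normal(1.0, 0.1, size=n)     # ~10% sample precision error
    # Invert the total-probability relation: observed = true*sens + background.
    c_true = (c_obs * meas_err - false_pos) / sensitivity
    return np.clip(c_true, 0.0, None)           # concentrations cannot be negative

dist = true_concentration_mc(1000.0)
lo, hi = np.percentile(dist, [2.5, 97.5])      # 95% interval for the true value
```

    The resulting distribution, rather than a single corrected number, is what can feed downstream TMDL or risk models.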

  10. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from the soft-thresholded image. OAS object information extraction based on translation-invariant wavelet denoising is studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from an interferogram, and that translation-invariant denoising extracts information better than plain soft-threshold denoising.
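
    The translation-invariant scheme the abstract compares against plain soft thresholding can be sketched with cycle spinning over a one-level Haar transform. The signal, noise level, threshold, and number of shifts below are illustrative assumptions:

```python
import numpy as np

def soft(x, t):
    """Donoho soft threshold: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the details, reconstruct."""
    even, odd = x[0::2], x[1::2]
    a, d = (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
    d = soft(d, thresh)
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def ti_denoise(x, thresh, n_shifts=8):
    """Translation-invariant denoising: average denoised circular shifts
    (cycle spinning) to suppress pseudo-Gibbs artifacts near edges."""
    acc = np.zeros_like(x)
    for s in range(n_shifts):
        acc += np.roll(haar_denoise(np.roll(x, s), thresh), -s)
    return acc / n_shifts

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 128)   # piecewise-constant test signal
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = ti_denoise(noisy, thresh=0.4)
```

    Averaging over shifts removes the dependence of the result on where the signal's discontinuities fall relative to the dyadic grid, which is exactly the source of pseudo-Gibbs oscillations.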

  11. Complex Wavelet transform for MRI

    International Nuclear Information System (INIS)

    Junor, P.; Janney, P.

    2004-01-01

    Full text: There is a perpetual compromise encountered in magnetic resonance (MRI) image reconstruction, between the traditional elements of image quality (noise, spatial resolution and contrast). Additional factors exacerbating this trade-off include various artifacts, computational (and hence time-dependent) overhead, and financial expense. This paper outlines a new approach to the problem of minimizing MRI image acquisition and reconstruction time without compromising resolution and noise reduction. The standard approaches for reconstructing magnetic resonance (MRI) images from raw data (which rely on relatively conventional signal processing) have matured but there are a number of challenges which limit their use. A major one is the 'intrinsic' signal-to-noise ratio (SNR) of the reconstructed image that depends on the strength of the main field. A typical clinical MRI almost invariably uses a super-cooled magnet in order to achieve a high field strength. The ongoing running cost of these super-cooled magnets prompts consideration of alternative magnet systems for use in MRIs for developing countries and in some remote regional installations. The decrease in image quality from using lower field strength magnets can be addressed by improvements in signal processing strategies. Conversely, improved signal processing will obviously benefit the current conventional field strength MRI machines. Moreover, the 'waiting time' experienced in many MR sequences (due to the relaxation time delays) can be exploited by more rigorous processing of the MR signals. Acquisition often needs to be repeated so that coherent averaging may partially redress the shortfall in SNR, at the expense of further delay. Wavelet transforms have been used in MRI as an alternative for encoding and denoising for over a decade. These have not supplanted the traditional Fourier transform methods that have long been the mainstay of MRI reconstruction, but have some inflexibility. The dual

  12. Estimating average alcohol consumption in the population using multiple sources: the case of Spain.

    Science.gov (United States)

    Sordo, Luis; Barrio, Gregorio; Bravo, María J; Villalbí, Joan R; Espelt, Albert; Neira, Montserrat; Regidor, Enrique

    2016-01-01

    National estimates on per capita alcohol consumption are provided regularly by various sources and may have validity problems, so corrections are needed for monitoring and assessment purposes. Our objectives were to compare different alcohol availability estimates for Spain, to build the best estimate (actual consumption), characterize its time trend during 2001-2011, and quantify the extent to which other estimates (coverage) approximated actual consumption. Estimates were: alcohol availability from the Spanish Tax Agency (Tax Agency availability), World Health Organization (WHO availability) and other international agencies, self-reported purchases from the Spanish Food Consumption Panel, and self-reported consumption from population surveys. Analyses included calculating: between-agency discrepancy in availability, multisource availability (correcting Tax Agency availability by underestimation of wine and cider), actual consumption (adjusting multisource availability by unrecorded alcohol consumption/purchases and alcohol losses), and coverage of selected estimates. Sensitivity analyses were undertaken. Time trends were characterized by joinpoint regression. Between-agency discrepancy in alcohol availability remained high in 2011, mainly because of wine and spirits, although some decrease was observed during the study period. The actual consumption was 9.5 l of pure alcohol/person-year in 2011, decreasing 2.3 % annually, mainly due to wine and spirits. 2011 coverage of WHO availability, Tax Agency availability, self-reported purchases, and self-reported consumption was 99.5, 99.5, 66.3, and 28.0 %, respectively, generally with downward trends (last three estimates, especially self-reported consumption). The multisource availability overestimated actual consumption by 12.3 %, mainly due to tourism imbalance. Spanish estimates of per capita alcohol consumption show considerable weaknesses. Using uncorrected estimates, especially self-reported consumption, for

  13. Use of Multiple Data Sources to Estimate the Economic Cost of Dengue Illness in Malaysia

    Science.gov (United States)

    Shepard, Donald S.; Undurraga, Eduardo A.; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan

    2012-01-01

    Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue. PMID:23033404
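    The per-capita figure can be sanity-checked against the reported total. A minimal sketch; the implied population is derived from the abstract's own numbers and is not stated there:

```python
# Back-of-envelope check of the per-capita burden reported above.
# The population value is inferred from the reported totals, not given
# in the abstract.
total_cost_usd = 56e6      # reported annual economic burden, US$
per_capita_usd = 2.03      # reported per-capita burden, US$

implied_population = total_cost_usd / per_capita_usd
print(round(implied_population / 1e6, 1))  # 27.6 (million)
```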

  14. An open source framework for tracking and state estimation ('Stone Soup')

    Science.gov (United States)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets. Keywords: Tracking,

  15. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    Science.gov (United States)

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang, Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that with increasing spatial resolution of the remote sensing data, the precision of the estimation models increased. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed how the estimation precision of desert vegetation coverage changes with the spatial resolution of the remote sensing data, and realized the quantitative conversion of parameters and scales among high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which provides direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.

  16. Assembling GHERG: Could "academic crowd-sourcing" address gaps in global health estimates?

    Science.gov (United States)

    Rudan, Igor; Campbell, Harry; Marušić, Ana; Sridhar, Devi; Nair, Harish; Adeloye, Davies; Theodoratou, Evropi; Chan, Kit Yee

    2015-06-01

    In recent months, the World Health Organization (WHO), independent academic researchers, the Lancet and PLoS Medicine journals worked together to improve reporting of population health estimates. The new guidelines for accurate and transparent health estimates reporting (likely to be named GATHER), which are eagerly awaited, represent a helpful move that should benefit the field of global health metrics. Building on this progress and drawing from a tradition of Child Health Epidemiology Reference Group (CHERG)'s successful work model, we would like to propose a new initiative - "Global Health Epidemiology Reference Group" (GHERG). We see GHERG as an informal and entirely voluntary international collaboration of academic groups who are willing to contribute to improving disease burden estimates and respect the principles of the new guidelines - a form of "academic crowd-sourcing". The main focus of GHERG will be to identify the "gap areas" where not much information is available and/or where there is a lot of uncertainty present about the accuracy of the existing estimates. This approach should serve to complement the existing WHO and IHME estimates and to represent added value to both efforts.

  17. Use of multiple data sources to estimate the economic cost of dengue illness in Malaysia.

    Science.gov (United States)

    Shepard, Donald S; Undurraga, Eduardo A; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan

    2012-11-01

    Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue.

  18. Open-Source Python Modules to Estimate Level Ice Thickness from Ice Charts

    Science.gov (United States)

    Geiger, C. A.; Deliberty, T. L.; Bernstein, E. R.; Helfrich, S.

    2012-12-01

    A collaborative research effort between the University of Delaware (UD) and National Ice Center (NIC) addresses the task of providing open-source translations of sea ice stage-of-development into level ice thickness estimates on a 4km grid for the Interactive Multisensor Snow and Ice Mapping System (IMS). The characteristics for stage-of-development are quantified from remote sensing imagery with estimates of level ice thickness categories originating from World Meteorological Organization (WMO) egg coded ice charts codified since the 1970s. Conversions utilize Python scripting modules which transform electronic ice charts with WMO egg code characteristics into five level ice thickness categories, in centimeters, (0-10, 10-30, 30-70, 70-120, >120cm) and five ice types (open water, first year pack ice, fast ice, multiyear ice, and glacial ice with a reserve slot for deformed ice fractions). Both level ice thickness categories and ice concentration fractions are reported with uncertainties propagated based on WMO ice stage ranges which serve as proxy estimates for standard deviation. These products are in preparation for use by NCEP, CMC, and NAVO by 2014 based on their modeling requirements for daily products in near-real time. In addition to development, continuing research tests the value of these estimated products against in situ observations to improve both value and uncertainty estimates.
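    The egg-code-to-thickness translation described above can be sketched as a simple lookup. The category labels and the midpoint/half-range convention below are illustrative assumptions, with the half-range standing in for the WMO-range-based uncertainty proxy mentioned in the abstract:

```python
# Sketch of a stage-of-development -> level ice thickness lookup.
# Categories follow the five ranges quoted in the abstract; the midpoint
# and half-range conventions are illustrative, not the NIC/UD code.
THICKNESS_CM = {            # category -> (min_cm, max_cm)
    "0-10":   (0, 10),
    "10-30":  (10, 30),
    "30-70":  (30, 70),
    "70-120": (70, 120),
    ">120":   (120, None),
}

def midpoint_estimate(category):
    """Return a midpoint thickness (cm) and a half-range uncertainty
    proxy, mirroring the use of WMO stage ranges as std-dev proxies."""
    lo, hi = THICKNESS_CM[category]
    if hi is None:          # open-ended top category: no defined midpoint
        return 120.0, None
    return (lo + hi) / 2.0, (hi - lo) / 2.0

print(midpoint_estimate("30-70"))  # (50.0, 20.0)
```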

  19. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    Science.gov (United States)

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness (LS) factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as a 50% change or as much as a 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
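    To see why slope so strongly leverages sediment estimates, the LS factor can be computed with the common Wischmeier-Smith form; the exponent m and the 22.13 m reference slope length below are textbook defaults, not values taken from this study:

```python
import math

def ls_factor(slope_percent, slope_length_m=22.13, m=0.5):
    """Common Wischmeier-Smith form of the USLE LS factor. The exponent m
    and the 22.13 m reference length are textbook defaults (assumptions),
    not parameters from the paper above."""
    theta = math.atan(slope_percent / 100.0)
    s = math.sin(theta)
    return (slope_length_m / 22.13) ** m * (65.41 * s * s + 4.56 * s + 0.065)

# A 2.5-fold increase in slope (e.g. 4% -> 10%, as when moving from a
# 90 m to a 3 m DEM) more than triples the LS factor:
ratio = ls_factor(10.0) / ls_factor(4.0)
print(round(ratio, 2))  # ~3.3
```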

  20. Nonlinear estimation-based dipole source localization for artificial lateral line systems

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan Xiaobo

    2013-01-01

    As a flow-sensing organ, the lateral line system plays an important role in various behaviors of fish. An engineering equivalent of a biological lateral line is of great interest to the navigation and control of underwater robots and vehicles. A vibrating sphere, also known as a dipole source, can emulate the rhythmic movement of fins and body appendages, and has been widely used as a stimulus in the study of biological lateral lines. Dipole source localization has also become a benchmark problem in the development of artificial lateral lines. In this paper we present two novel iterative schemes, referred to as Gauss–Newton (GN) and Newton–Raphson (NR) algorithms, for simultaneously localizing a dipole source and estimating its vibration amplitude and orientation, based on the analytical model for a dipole-generated flow field. The performance of the GN and NR methods is first confirmed with simulation results and the Cramer–Rao bound (CRB) analysis. Experiments are further conducted on an artificial lateral line prototype, consisting of six millimeter-scale ionic polymer–metal composite sensors with intra-sensor spacing optimized with CRB analysis. Consistent with simulation results, the experimental results show that both GN and NR schemes are able to simultaneously estimate the source location, vibration amplitude and orientation with comparable precision. Specifically, the maximum localization error is less than 5% of the body length (BL) when the source is within the distance of one BL. Experimental results have also shown that the proposed schemes are superior to the beamforming method, one of the most competitive approaches reported in literature, in terms of accuracy and computational efficiency. (paper)
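    As a hedged illustration of the Gauss-Newton iteration used above (not the authors' dipole flow-field model), the sketch below localizes a source from a toy r^-3 amplitude-decay model, echoing the far-field fall-off of a dipole; all positions and amplitudes are made up:

```python
import numpy as np

# Toy Gauss-Newton source localization. The r**-3 amplitude-decay model
# is a stand-in for the analytical dipole flow-field model in the paper,
# chosen only to illustrate the iteration itself.
sensors = np.stack([np.linspace(-2, 2, 9), np.zeros(9)], axis=1)
true_p = np.array([0.3, 1.0, 1.0])            # (x, y, amplitude)

def model(p):
    r = np.linalg.norm(sensors - p[:2], axis=1)
    return p[2] / r**3

y = model(true_p)                             # noise-free "measurements"
p = np.array([0.5, 1.3, 0.7])                 # initial guess

for _ in range(50):
    r = y - model(p)                          # residual vector
    # numerical Jacobian, one column per parameter
    J = np.stack([(model(p + eps) - model(p)) / 1e-6
                  for eps in 1e-6 * np.eye(3)], axis=1)
    # Gauss-Newton step with tiny damping for numerical safety
    p += np.linalg.solve(J.T @ J + 1e-12 * np.eye(3), J.T @ r)

print(np.round(p, 3))                         # ~ [0.3, 1.0, 1.0]
```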

  1. A method for estimating the relative degree of saponification of xanthophyll sources and feedstuffs.

    Science.gov (United States)

    Fletcher, D L

    2006-05-01

    Saponification of xanthophyll esters in various feed sources has been shown to improve pigmentation efficiency in broiler skin and egg yolks. Three trials were conducted to evaluate a rapid liquid chromatography procedure for estimating the relative degree of xanthophyll saponification using samples of yellow corn, corn gluten meal, alfalfa, and 6 commercially available marigold meal concentrates. In each trial, samples were extracted using a modification of the 1984 Association of Official Analytical Chemists hot saponification procedure with and without the addition of KOH. A comparison of the chromatography results was used to estimate percent saponification of the original sample by dividing the nonsaponified extraction values by the saponified extraction values. A comparison of the percent saponified xanthophylls for each product (mg/kg) was: yellow corn, 101; corn gluten meal, 78; alfalfa, 97.9; and marigold concentrates A through F, 99.8, 4.6, 99.0, 95.6, 96.8, and 6.6, respectively. These results indicate that a modification of the 1984 Association of Official Analytical Chemists procedure and liquid column chromatography can be used to quickly verify saponification and can be used to estimate the relative degree of saponification of an unknown xanthophyll source.

  2. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    Science.gov (United States)

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is, a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective to stabilize the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.

  3. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by the research institutes, about 25% of the new generation will be generated by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on the power systems, especially on the distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical consideration. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by Weighted Least-Square (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for the DSE problems. (author)

  4. Wavelet denoising of multiframe optical coherence tomography data.

    Science.gov (United States)

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. In contrast to existing approaches, the algorithm does not rely on simple averaging of multiple image frames or on denoising the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.
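    The frame-wise decompose/weight/average/reconstruct idea can be sketched with a one-level Haar transform. The additive noise model and the Wiener-style weights below are simplifying assumptions for illustration, not the paper's exact estimator (real OCT speckle is multiplicative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean test signal and 8 noisy "frames" (speckle approximated here as
# additive noise purely for illustration).
n = 256
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)
frames = clean + 0.3 * rng.standard_normal((8, n))

def haar_fwd(x):                 # one-level Haar decomposition
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inv(a, d):              # inverse one-level Haar
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

A, D = zip(*(haar_fwd(f) for f in frames))
A, D = np.array(A), np.array(D)

# Wiener-like weight: detail coefficients that agree across frames are
# treated as structure; inconsistent ones as speckle-like noise.
mean_d, var_d = D.mean(axis=0), D.var(axis=0)
w = mean_d**2 / (mean_d**2 + var_d + 1e-12)
denoised = haar_inv(A.mean(axis=0), w * mean_d)

mse_single = np.mean((frames[0] - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_single)  # True
```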

  5. Efficient regularization with wavelet sparsity constraints in photoacoustic tomography

    Science.gov (United States)

    Frikel, Jürgen; Haltmeier, Markus

    2018-02-01

    In this paper, we consider the reconstruction problem of photoacoustic tomography (PAT) with a flat observation surface. We develop a direct reconstruction method that employs regularization with wavelet sparsity constraints. To that end, we derive a wavelet-vaguelette decomposition (WVD) for the PAT forward operator and a corresponding explicit reconstruction formula in the case of exact data. In the case of noisy data, we combine the WVD reconstruction formula with soft-thresholding, which yields a spatially adaptive estimation method. We demonstrate that our method is statistically optimal for white random noise if the unknown function is assumed to lie in any Besov-ball. We present generalizations of this approach and, in particular, we discuss the combination of PAT-vaguelette soft-thresholding with a total variation (TV) prior. We also provide an efficient implementation of the PAT-vaguelette transform that leads to fast image reconstruction algorithms supported by numerical results.

  6. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines development and research across these applications, and the book summarizes the state-of-the-art research on wavelets in speech processing.

  7. A source term estimation method for a nuclear accident using atmospheric dispersion models

    DEFF Research Database (Denmark)

    Kim, Minsik; Ohba, Ryohji; Oura, Masamichi

    2015-01-01

    The objective of this study is to develop an operational source term estimation (STE) method applicable for a nuclear accident like the incident that occurred at the Fukushima Dai-ichi nuclear power station in 2011. The new STE method presented here is based on data from atmospheric dispersion models and short-range observational data around the nuclear power plants. The accuracy of this method is validated with data from a wind tunnel study that involved a tracer gas release from a scaled model experiment at Tokai Daini nuclear power station in Japan. We then use the methodology developed and validated through the effort described in this manuscript to estimate the release rate of radioactive material from the Fukushima Dai-ichi nuclear power station.

  8. Real-time software for multi-isotopic source term estimation

    International Nuclear Information System (INIS)

    Goloubenkov, A.; Borodin, R.; Sohier, A.

    1996-01-01

    Consideration is given to the development of software for one of the crucial components of RODOS - assessment of the source rate (SR) from indirect measurements. Four components of the software are described in the paper. The first component is a GRID system, which allows stochastic meteorological and radioactivity fields to be prepared from measured data. The second part is a model of atmospheric transport which can be adapted to emulate practically any gamma dose/spectrum detector. The third is a method which allows space-time and quantitative discrepancies between measured and modelled data to be taken into account simultaneously. It is based on a preference scheme selected by an expert. The last component is a special optimization method for calculation of the multi-isotopic SR and its uncertainties. Results of a validation of the software using tracer experiment data and a Chernobyl source estimation for the main dose-forming isotopes are included in the paper.

  9. Wavelet modeling of signals for non-destructive testing of concretes

    International Nuclear Information System (INIS)

    Shao, Zhixue; Shi, Lihua; Cai, Jian

    2011-01-01

    In a non-destructive test of concrete structures, ultrasonic pulses are commonly used to detect damage or embedded objects from their reflections. A wavelet modeling method is proposed here to identify the main reflections and to remove the interferences in the detected ultrasonic waves. This method assumes that if the structure is stimulated by a wavelet function with good time–frequency localization ability, the detected signal is a combination of time-delayed and amplitude-attenuated wavelets. Therefore, modeling of the detected signal by wavelets can give a straightforward and simple model of the original signal. The central time and amplitude of each wavelet represent the position and amplitude of the reflections in the detected structure. A signal processing method is also proposed to estimate the structure's response to wavelet excitation from its response to a high-voltage pulse with a sharp leading edge. A signal generation card with a compact peripheral component interconnect extension for instrumentation interface is designed to produce this high-voltage pulse. The proposed method is applied to synthetic aperture focusing of concrete specimens and the imaging results are provided.
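    The delay-and-attenuation model above lends itself to a matched-filter sketch: build a trace as shifted, scaled copies of a known wavelet, then recover the strongest reflection by cross-correlation. The Ricker wavelet and the sample rate are illustrative assumptions, not the excitation used in the paper:

```python
import numpy as np

# Matched-filter sketch for the time-delayed, amplitude-attenuated
# wavelet model. The Ricker wavelet and sample rate are assumptions.
fs = 1000.0                                   # sample rate, Hz (assumed)
t = np.arange(-0.1, 0.1, 1 / fs)
f0 = 50.0                                     # wavelet centre frequency, Hz
w = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-((np.pi * f0 * t) ** 2))

trace = np.zeros(1000)
for delay, amp in [(200, 1.0), (450, 0.4)]:   # two echoes (samples, gain)
    trace[delay:delay + w.size] += amp * w

# Cross-correlate the trace with the known wavelet; the peak gives the
# onset of the strongest reflection.
xc = np.correlate(trace, w, mode="valid")
peak = int(np.argmax(xc))
print(peak)  # 200
```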

  10. Mixing Matrix Estimation of Underdetermined Blind Source Separation Based on Data Field and Improved FCM Clustering

    Directory of Open Access Journals (Sweden)

    Qiang Guo

    2018-01-01

    In modern electronic warfare, multiple input multiple output (MIMO) radar has become an important tool for electronic reconnaissance and intelligence transmission because of its anti-stealth, high-resolution, low-intercept and anti-destruction characteristics. As a common MIMO radar signal, discrete frequency coding waveform (DFCW) has serious overlap in both time and frequency, so it cannot be directly used in current radar signal separation problems. Existing fuzzy clustering algorithms have problems with initial value selection, low convergence rates and local extrema, which lead to low accuracy in the mixing matrix estimation. Consequently, a novel mixing matrix estimation algorithm based on data field and improved fuzzy C-means (FCM) clustering is proposed. First, the sparsity and linear clustering characteristics of the time–frequency domain MIMO radar signals are enhanced by using the single-source principal value of complex angular detection. Second, the data field uses potential energy information to analyze the particle distribution, from which a new clustering-number selection scheme is designed. The particle swarm optimization algorithm is then introduced to improve the iterative clustering process of FCM, finally yielding the estimate of the mixing matrix. The simulation results show that the proposed algorithm improves both the estimation accuracy and the robustness of the mixing matrix estimation.
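    A plain FCM iteration, without the paper's data-field cluster-number selection or PSO initialization, can be sketched on synthetic sparse points whose cluster centres play the role of mixing-matrix columns; the mixing matrix, noise level and initial centres below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Plain fuzzy C-means on synthetic sparse time-frequency points; the
# recovered centres play the role of mixing-matrix column estimates.
# The data-field and PSO enhancements from the paper are not reproduced.
true_cols = np.array([[1.0, 0.2],
                      [0.3, 0.9]])            # columns = mixing directions
labels = rng.integers(0, 2, 600)
X = true_cols.T[labels] + 0.02 * rng.standard_normal((600, 2))

m, K = 2.0, 2                                 # fuzzifier, cluster count
C = np.array([[0.9, 0.4], [0.3, 0.8]])        # rough initial centres (assumed)
for _ in range(100):
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
    # membership u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
    U = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True)
    Um = U ** m
    C = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centre update

print(np.round(C[np.argsort(C[:, 0])], 2))    # rows ~ the two true columns
```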

  11. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  12. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth's surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images from an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for the model solution. The ASTER data and field observations employed in this study were collected over corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated by measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are similar to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
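    The modification can be sketched as a scaling of the standard Priestley-Taylor term by multiplicative constraint factors in the spirit of PT-JPL. All numeric values below (meteorological constants, net radiation, constraint factors) are illustrative placeholders, not data from the study:

```python
# Priestley-Taylor canopy latent heat, scaled by illustrative temperature
# and moisture constraint factors in the spirit of PT-JPL. Every number
# here is an assumed placeholder, not a value from the paper.
alpha = 1.26          # Priestley-Taylor coefficient
delta = 0.145         # slope of saturation vapour pressure curve, kPa/K
gamma = 0.066         # psychrometric constant, kPa/K
rn_canopy = 400.0     # net radiation divergence to the canopy, W/m^2

le_unconstrained = alpha * delta / (delta + gamma) * rn_canopy
f_t, f_m = 0.9, 0.6   # temperature and moisture constraints in [0, 1]
le_constrained = f_t * f_m * le_unconstrained

# Under stress, the constrained flux is strictly smaller, which is how
# the improved model avoids overestimating latent heat.
print(round(le_unconstrained), round(le_constrained))  # 346 187
```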

  13. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    Science.gov (United States)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because they are often unknown and the Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method, known as the Variational Iterative Refinement STE algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios where we do not have the city infrastructure information readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03

  14. Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller

    2014-01-01

    This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator’s accuracy is performed theoretically and investigated in vivo. Womersley’s model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors....... A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis...

  15. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and that the temporal profiles of source release rate and plume rise height are successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
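As a rough sketch of the analysis step such a filter performs (the paper's Lagrangian puff model and four-parameter state are replaced here by a toy linear release-rate problem; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_operator, obs_err_std):
    """One perturbed-observation ensemble Kalman filter analysis step.

    ensemble     : (n_members, n_state) prior state samples
    obs          : (n_obs,) observation vector
    obs_operator : maps one state vector to predicted observations
    obs_err_std  : observation error standard deviation
    """
    n_members = ensemble.shape[0]
    pred = np.array([obs_operator(x) for x in ensemble])   # (n_members, n_obs)
    X = ensemble - ensemble.mean(axis=0)                   # state anomalies
    Y = pred - pred.mean(axis=0)                           # predicted-obs anomalies
    Pxy = X.T @ Y / (n_members - 1)
    Pyy = Y.T @ Y / (n_members - 1) + obs_err_std**2 * np.eye(obs.size)
    K = Pxy @ np.linalg.inv(Pyy)                           # Kalman gain
    obs_pert = obs + obs_err_std * rng.standard_normal((n_members, obs.size))
    return ensemble + (obs_pert - pred) @ K.T

# Twin experiment: recover a scalar release rate q from noisy
# concentrations c = h * q, where h is a known dispersion factor.
h = np.array([0.5, 1.0, 2.0])
q_true = 4.0
obs = h * q_true + 0.01 * rng.standard_normal(3)
prior = rng.normal(1.0, 2.0, size=(200, 1))   # ensemble of release-rate guesses
post = enkf_update(prior, obs, lambda x: h * x[0], 0.01)
```

With accurate observations the posterior ensemble collapses around the true release rate, which is the mechanism the twin experiments above exploit.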

  16. Construction of wavelets with composite dilations

    International Nuclear Information System (INIS)

    Wu Guochang; Li Zhiqiang; Cheng Zhengxing

    2009-01-01

    In order to overcome classical wavelets' shortcomings in image processing problems, many generating systems have been developed, enlarging the wavelet family. In this paper, the notion of AB-multiresolution analysis is generalized, and the corresponding theory is developed. For an AB-multiresolution analysis associated with any expanding matrices, we deduce that there exists a single scaling function in its reducing subspace. Under some conditions, wavelets with composite dilations can be obtained by AB-multiresolution analysis, which permits the existence of a fast implementation algorithm. Then, we provide an approach to designing wavelets with composite dilations from classic wavelets. Our approach covers both separable and partly nonseparable cases. In each section, we construct examples with nice properties to support our theory.

  17. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm with the kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency are used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximizes the capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm incorporates significant wavelet kernel functions iteratively by virtue of the Householder matrix, producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.

  18. Some applications of wavelets to physics

    International Nuclear Information System (INIS)

    Thompson, C.R.

    1992-01-01

    A thorough description of a fast wavelet transform algorithm (FWT) and its inverse (IFWT) are given. The effects of noise in the wavelet transform are studied, in particular the effects on signal reconstruction. A model for additive white noise on the coefficients is presented along with two methods that can help to suppress the effects of noise corruption of the signal. Problems of improper sampling are studied, including the propagation of uncertainty through the FWT and IFWT. Interpolation techniques and data compression are also studied. The FWT and IFWT are generalized for analysis of two dimensional images. Methods for edge detection are discussed as well as contrast improvement and data compression. Finally, wavelets are applied to electromagnetic wave propagation problems. Formulas relating the wavelet and Fourier transforms are given, and expansions of time-dependent electromagnetic fields using both fixed and moving wavelet bases are studied

  19. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    Science.gov (United States)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum thus depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then we explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to
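The forward model and log-amplitude misfit described above can be sketched as follows; a plain grid search stands in for the paper's basin-hopping global exploration, and all numerical values are illustrative:

```python
import numpy as np

def brune_spectrum(f, omega0, fc, gamma):
    """Generalized Brune (1970) displacement amplitude spectrum."""
    return omega0 / (1.0 + (f / fc) ** gamma)

# Synthetic spectrum: low-frequency level 1e-6, corner frequency 2 Hz,
# high-frequency decay gamma = 2, with 5% lognormal noise.
rng = np.random.default_rng(2)
f = np.logspace(-1, 2, 200)
obs = brune_spectrum(f, 1e-6, 2.0, 2.0) * np.exp(0.05 * rng.standard_normal(f.size))

# Global exploration over (fc, gamma); for each trial the best log-level
# omega0 follows analytically, since it only shifts the log-spectrum.
best = (np.inf, None)
for fc in np.logspace(-0.5, 1.0, 60):
    for g in np.linspace(1.0, 3.0, 41):
        log_shape = -np.log1p((f / fc) ** g)           # log of 1 / (1 + (f/fc)^g)
        lo0 = np.mean(np.log(obs) - log_shape)         # best-fit log omega0
        m = np.sum((lo0 + log_shape - np.log(obs)) ** 2)  # L2 misfit on log-amplitudes
        if m < best[0]:
            best = (m, (np.exp(lo0), fc, g))
omega0_hat, fc_hat, gamma_hat = best[1]
```

The paper's probabilistic treatment would then sample the pdf built from this misfit around the minimum; the sketch only locates the minimum itself.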

  20. Complex Wavelet Based Modulation Analysis

    DEFF Research Database (Denmark)

    Luneau, Jean-Marc; Lebrun, Jérôme; Jensen, Søren Holdt

    2008-01-01

    Low-frequency modulation of sound carries important information for speech and music. The modulation spectrum is commonly obtained by spectral analysis of the sole temporal envelopes of the sub-bands out of a time-frequency analysis. Processing in this domain usually creates undesirable distortions...... polynomial trends. Moreover an analytic Hilbert-like transform is possible with complex wavelets implemented as an orthogonal filter bank. By working in an alternative transform domain coined as “Modulation Subbands”, this transform shows very promising denoising capabilities and suggests new approaches for joint...

  1. Wavelets and the Lifting Scheme

    DEFF Research Database (Denmark)

    la Cour-Harbo, Anders; Jensen, Arne

    The objective of this article is to give a concise introduction to the discrete wavelet transform (DWT) based on a technique called lifting. The lifting technique allows one to give an elementary, but rigorous, definition of the DWT, with modest requirements on the reader. A basic knowledge...... of linear algebra and signal processing will suffice. The lifting based definition is equivalent to the usual filter bank based definition of the DWT. The article does not discuss applications in any detail. The reader is referred to other articles in this collection....
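For the Haar case, the split-predict-update lifting steps reduce to a few lines; this sketch follows the standard lifting factorization rather than any particular formulation in the article:

```python
def haar_lifting(x):
    """One level of the Haar DWT via lifting: split, predict, update.
    x must have even length; returns (approximation, detail)."""
    even, odd = x[0::2], x[1::2]                          # split
    detail = [o - e for o, e in zip(odd, even)]           # predict odd from even
    approx = [e + d / 2 for e, d in zip(even, detail)]    # update to preserve the mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order with opposite signs."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

a, d = haar_lifting([5.0, 7.0, 3.0, 1.0])   # a = pairwise means, d = differences
```

Because every lifting step is inverted simply by re-applying it with the opposite sign, perfect reconstruction holds by construction, which is what makes the lifting definition elementary yet rigorous.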

  2. Wavelets and the lifting scheme

    DEFF Research Database (Denmark)

    la Cour-Harbo, Anders; Jensen, Arne

    2012-01-01

    The objective of this article is to give a concise introduction to the discrete wavelet transform (DWT) based on a technique called lifting. The lifting technique allows one to give an elementary, but rigorous, definition of the DWT, with modest requirements on the reader. A basic knowledge...... of linear algebra and signal processing will suffice. The lifting based definition is equivalent to the usual filter bank based definition of the DWT. The article does not discuss applications in any detail. The reader is referred to other articles in this collection....

  3. Wavelets and the lifting scheme

    DEFF Research Database (Denmark)

    la Cour-Harbo, Anders; Jensen, Arne

    2009-01-01

    The objective of this article is to give a concise introduction to the discrete wavelet transform (DWT) based on a technique called lifting. The lifting technique allows one to give an elementary, but rigorous, definition of the DWT, with modest requirements on the reader. A basic knowledge...... of linear algebra and signal processing will suffice. The lifting based definition is equivalent to the usual filter bank based definition of the DWT. The article does not discuss applications in any detail. The reader is referred to other articles in this collection....

  4. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations

  5. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
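A minimal sketch of the PSO stage, with a toy forward model standing in for the trained ANN (the model form, swarm constants, and all values below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(params, sensors):
    """Toy forward model standing in for the trained ANN: concentration at
    each sensor from release strength q and source location x0."""
    q, x0 = params
    return q * np.exp(-0.5 * (sensors - x0) ** 2)

sensors = np.linspace(0.0, 10.0, 15)
obs = forward((5.0, 4.0), sensors)                 # synthetic "measurements"

def misfit(p):
    return float(np.sum((forward(p, sensors) - obs) ** 2))

# Minimal particle swarm over (q, x0): inertia w, cognitive c1, social c2.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform([0.0, 0.0], [10.0, 10.0], size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
q_hat, x0_hat = gbest                              # should approach (5.0, 4.0)
```

The speed advantage claimed in the paper comes from the ANN making each `forward` evaluation cheap; the swarm logic itself is unchanged.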

  6. Estimation of the Void Fraction in the moderator cell of the Cold Neutron Source

    International Nuclear Information System (INIS)

    Choi, Jungwoon; Kim, Young-ki

    2015-01-01

    To estimate the average void fraction in the liquid hydrogen, the Kazimi and Chen correlation is used with a modification suggested by R.E. Williams in NBSR. Since the multiplying factor can change with the operating condition and working fluid, a different figure is applied to estimate the average void fraction for each moderator cell shape. This approach is checked against the void fraction measurements from the HANARO-CNS mock-up test. Owing to national research demands on cold neutron beam utilization, the Cold Neutron Research Facility has been built and operated for neutron scientists all over the world. In HANARO, the CNS facility has been operated since 2009. The actual void fraction, which is one of the dominant factors affecting the cold neutron flux, is difficult to know without a real measurement performed at the cryogenic temperature using the same moderator medium. Accordingly, the two-phase mock-up test in the CNS-IPA (In-pool assembly) had been performed using liquid hydrogen in terms of fluidity check, void fraction measurement, operation procedure set-up, and so on for the development of the HANARO-CNS. This paper presents the estimated void fraction for the different operating conditions and geometrical shapes in comparison with the measurement data of the void fraction in the full-scale mock-up test, based on the Kazimi and Chen correlation. This approach is applied to estimate the average void fraction in the newly designed moderator cell using liquid hydrogen as the working fluid in a two-phase thermosiphon. From this calculation result, the estimated average void fraction will be used to design an optimized cold neutron source that produces the maximum cold neutron flux within the desired wavelength.

  7. Estimation of the Void Fraction in the moderator cell of the Cold Neutron Source

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jungwoon; Kim, Young-ki [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    To estimate the average void fraction in the liquid hydrogen, the Kazimi and Chen correlation is used with a modification suggested by R.E. Williams in NBSR. Since the multiplying factor can change with the operating condition and working fluid, a different figure is applied to estimate the average void fraction for each moderator cell shape. This approach is checked against the void fraction measurements from the HANARO-CNS mock-up test. Owing to national research demands on cold neutron beam utilization, the Cold Neutron Research Facility has been built and operated for neutron scientists all over the world. In HANARO, the CNS facility has been operated since 2009. The actual void fraction, which is one of the dominant factors affecting the cold neutron flux, is difficult to know without a real measurement performed at the cryogenic temperature using the same moderator medium. Accordingly, the two-phase mock-up test in the CNS-IPA (In-pool assembly) had been performed using liquid hydrogen in terms of fluidity check, void fraction measurement, operation procedure set-up, and so on for the development of the HANARO-CNS. This paper presents the estimated void fraction for the different operating conditions and geometrical shapes in comparison with the measurement data of the void fraction in the full-scale mock-up test, based on the Kazimi and Chen correlation. This approach is applied to estimate the average void fraction in the newly designed moderator cell using liquid hydrogen as the working fluid in a two-phase thermosiphon. From this calculation result, the estimated average void fraction will be used to design an optimized cold neutron source that produces the maximum cold neutron flux within the desired wavelength.

  8. Improving risk estimates of runoff producing areas: formulating variable source areas as a bivariate process.

    Science.gov (United States)

    Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd

    2014-05-01

    Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
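The Curve Number runoff equation underlying the method can be sketched as follows (English units, with the conventional initial abstraction Ia = 0.2S; the paper's bivariate rainfall and soil-moisture indexing is not reproduced):

```python
def scs_runoff(p_in, cn):
    """NRCS Curve Number storm runoff depth (inches) for rainfall p_in (inches).
    S is the potential maximum retention; Ia = 0.2 * S is the initial abstraction."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_in <= ia:
        return 0.0                  # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)

# Wetter antecedent conditions are conventionally represented by a higher
# effective CN, so the same 3-inch storm produces more runoff:
dry = scs_runoff(3.0, cn=70)
wet = scs_runoff(3.0, cn=85)
```

The bivariate view of the paper amounts to treating both `p_in` (storm rainfall) and the antecedent-wetness adjustment of `cn` as random, and mapping the resulting runoff risk across the watershed.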

  9. Estimated Dietary Polyphenol Intake and Major Food and Beverage Sources among Elderly Japanese

    Directory of Open Access Journals (Sweden)

    Chie Taguchi

    2015-12-01

    Full Text Available Estimating polyphenol intake contributes to the understanding of polyphenols’ health benefits. However, information about human polyphenol intake is scarce, especially in the elderly. This study aimed to estimate the dietary intake and major sources of polyphenols and to determine whether there is any relationship between polyphenol intake and micronutrient intake in healthy elderly Japanese. First, 610 subjects (569 men, 41 women; aged 67.3 ± 6.1 years) completed food frequency questionnaires. We then calculated their total polyphenol intake using our polyphenol content database. Their average total polyphenol intake was 1492 ± 665 mg/day, the greatest part of which was provided by beverages (79.1%). The daily polyphenol intake differed largely among individuals (183–4854 mg/day), also attributable mostly to beverage consumption. Coffee (43.2%) and green tea (26.6%) were the major sources of total polyphenol; the top 20 food items accounted for >90%. The polyphenol intake did not strongly correlate with the intake of any micronutrient, suggesting that polyphenols may exert health benefits independently of nutritional intake. The polyphenol intake in this elderly population was slightly higher than previous data in Japanese adults, and beverages such as coffee and green tea contributed greatly to the intake.

  10. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method, so the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  11. Controlled source electromagnetic data analysis with seismic constraints and rigorous uncertainty estimation in the Black Sea

    Science.gov (United States)

    Gehrmann, R. A. S.; Schwalenberg, K.; Hölz, S.; Zander, T.; Dettmer, J.; Bialas, J.

    2016-12-01

    In 2014 an interdisciplinary survey was conducted as part of the German SUGAR project in the Western Black Sea targeting gas hydrate occurrences in the Danube Delta. Marine controlled source electromagnetic (CSEM) data were acquired with an inline seafloor-towed array (BGR), and a two-polarization horizontal ocean-bottom source and receiver configuration (GEOMAR). The CSEM data are co-located with high-resolution 2-D and 3-D seismic reflection data (GEOMAR). We present results from 2-D regularized inversion (MARE2DEM by Kerry Key), which provides a smooth model of the electrical resistivity distribution beneath the source and multiple receivers. The 2-D approach includes seafloor topography and structural constraints from seismic data. We estimate uncertainties from the regularized inversion and compare them to 1-D Bayesian inversion results. The probabilistic inversion for a layered subsurface treats the parameter values and the number of layers as unknown by applying reversible-jump Markov-chain Monte Carlo sampling. A non-diagonal data covariance matrix obtained from residual error analysis accounts for correlated errors. The resulting resistivity models show generally high resistivity values between 3 and 10 Ωm on average which can be partly attributed to depleted pore water salinities due to sea-level low stands in the past, and locally up to 30 Ωm which is likely caused by gas hydrates. At the base of the gas hydrate stability zone resistivities rise up to more than 100 Ωm which could be due to gas hydrate as well as a layer of free gas underneath. However, the deeper parts also show the largest model parameter uncertainties. Archie's Law is used to derive estimates of the gas hydrate saturation, which vary between 30 and 80% within the anomalous layers considering salinity and porosity profiles from a distant DSDP bore hole.
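A sketch of the Archie's Law saturation estimate described above; the Archie constants and the pore-water resistivity and porosity values below are illustrative assumptions, not the study's calibrated borehole values:

```python
def hydrate_saturation(rho, rho_w, porosity, a=1.0, m=2.0, n=2.0):
    """Archie's Law: rho = a * rho_w * porosity**(-m) * Sw**(-n).
    Hydrate fills the pore space not occupied by water, so S_h = 1 - Sw."""
    sw = (a * rho_w * porosity ** (-m) / rho) ** (1.0 / n)
    return 1.0 - min(sw, 1.0)

# Illustrative values: pore-water resistivity 0.25 ohm-m, porosity 0.55.
background = hydrate_saturation(rho=3.0, rho_w=0.25, porosity=0.55)
anomalous = hydrate_saturation(rho=30.0, rho_w=0.25, porosity=0.55)
```

Higher bulk resistivity maps to higher hydrate saturation; since freshened pore water (higher `rho_w`) also raises resistivity, the salinity and porosity profiles from the borehole are what let the study separate the two effects.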

  12. Estimating the prevalence of illicit opioid use in New York City using multiple data sources

    Directory of Open Access Journals (Sweden)

    McNeely Jennifer

    2012-06-01

    Full Text Available Background: Despite concerns about its health and social consequences, little is known about the prevalence of illicit opioid use in New York City. Individuals who misuse heroin and prescription opioids are known to bear a disproportionate burden of morbidity and mortality. Service providers and public health authorities are challenged to provide appropriate interventions in the absence of basic knowledge about the size and characteristics of this population. While illicit drug users are underrepresented in population-based surveys, they may be identified in multiple administrative data sources. Methods: We analyzed large datasets tracking hospital inpatient and emergency room admissions as well as drug treatment and detoxification services utilization. These were applied in combination with findings from a large general population survey and administrative records tracking prescriptions, drug overdose deaths, and correctional health services, to estimate the prevalence of heroin and non-medical prescription opioid use among New York City residents in 2006. These data were further applied to a descriptive analysis of opioid users entering drug treatment and hospital-based medical care. Results: These data sources identified 126,681 cases of opioid use among New York City residents in 2006. After applying adjustment scenarios to account for potential overlap between data sources, we estimated over 92,000 individual opioid users. By contrast, just 21,600 opioid users initiated drug treatment in 2006. Opioid users represented 4% of all individuals hospitalized, and accounted for over 44,000 hospitalizations during the calendar year. Conclusions: Our findings suggest that innovative approaches are needed to provide adequate services to this sizeable population of opioid users. Given the observed high rates of hospital services utilization, greater integration of drug services into medical settings could be one component of an effective approach to

  13. Estimating national crop yield potential and the relevance of weather data sources

    Science.gov (United States)

    Van Wart, Justin

    2011-12-01

    To determine where, when, and how to increase yields, researchers often analyze the yield gap (Yg), the difference between actual current farm yields and crop yield potential. Crop yield potential (Yp) is the yield of a crop cultivar grown under specific management limited only by temperature and solar radiation, and also by precipitation for water-limited yield potential (Yw). Yp and Yw are critical components of Yg estimations, but are very difficult to quantify, especially at larger scales because management data and especially daily weather data are scarce. A protocol was developed to estimate Yp and Yw at national scales using site-specific weather, soils and management data. Protocol procedures and inputs were evaluated to determine how to improve accuracy of Yp, Yw and Yg estimates. The protocol was also used to evaluate raw, site-specific and gridded weather database sources for use in simulations of Yp or Yw. The protocol was applied to estimate crop Yp in US irrigated maize and Chinese irrigated rice and Yw in US rainfed maize and German rainfed wheat. These crops and countries account for >20% of global cereal production. The results have significant implications for past and future studies of Yp, Yw and Yg. Accuracy of national long-term average Yp and Yw estimates was significantly improved if (i) > 7 years of simulations were performed for irrigated and > 15 years for rainfed sites, (ii) > 40% of nationally harvested area was within 100 km of all simulation sites, (iii) observed weather data coupled with satellite-derived solar radiation data were used in simulations, and (iv) planting and harvesting dates were specified within +/- 7 days of farmers' actual practices. These are much higher standards than have been applied in national estimates of Yp and Yw and this protocol is a substantial step in making such estimates more transparent, robust, and straightforward.
Finally, this protocol may be a useful tool for understanding yield trends and directing

  14. Estimated population exposure from nuclear power production and other radiation sources

    International Nuclear Information System (INIS)

    Pochin, E.E.

    1976-01-01

    Estimates are given of the total radiation dose from all forms of ionizing radiation resulting from nuclear power production. A power consumption of 1 kW per head of population, derived entirely from nuclear energy, would increase the average radiation exposure of the whole population from 100 mrem per year from natural sources (plus about 40 mrem per year from medical procedures and other artificial causes) by about 6 mrem per year. The genetically significant component of this increase would be about 4 mrem per year. Available estimates of harm from radiation indicate that this would give a risk per year per million of population of about 1 fatal induced malignancy, about the same number of malignancies fully treatable by operation, and, after many generations, about the same number of inherited defects, of greater or lesser severity, per year. Accidental injuries, particularly in constructional and mining work, would cause an estimated 1 fatality and 50 other accidents annually. Indications are given of the number of fatalities and accidents involved in equal power production by alternative methods, and of the value and limitations of such numerical comparisons in reaching decisions on the development of future power programmes.

  15. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In practical engineering, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. The method treats the centroid as the excitation point, and the dynamic transfer matrix is derived by using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computing example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation and the estimated excitation, the reliability of this method is verified. (authors)
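At each frequency, the indirect estimation reduces to inverting the measured transfer matrix; a least-squares sketch with synthetic data (dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# At one frequency, responses x at the measurement points relate to the
# unknown excitation f at the (virtual) excitation points by x = H f;
# with H measured beforehand, f follows from a pseudo-inverse of H.
n_resp, n_exc = 6, 2                      # 6 measurement points, 2 excitation DOFs
H = rng.standard_normal((n_resp, n_exc)) + 1j * rng.standard_normal((n_resp, n_exc))
f_true = np.array([1.0 + 0.5j, -0.3 + 2.0j])     # "unknown" excitation spectrum value
x = H @ f_true + 0.01 * rng.standard_normal(n_resp)   # measured responses (noisy)

f_hat, *_ = np.linalg.lstsq(H, x, rcond=None)    # least-squares inverse of H
```

Using more measurement points than excitation degrees of freedom, as here, keeps the inversion over-determined and damps measurement noise.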

  16. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in the WGCEP model. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions.

  17. A nonlinear wavelet method for data smoothing of low-level gamma-ray spectra

    International Nuclear Information System (INIS)

    Gang Xiao; Li Deng; Benai Zhang; Jianshi Zhu

    2004-01-01

    A nonlinear wavelet method was designed for smoothing low-level gamma-ray spectra. The spectra of a 60 Co graduated radioactive source and a mixed soil sample were each smoothed with this method and with a 5-point smoothing method. The FWHM of the 1,332 keV peak of the 60 Co source and the absolute activities of 238 U in the soil sample were calculated. The results show that the nonlinear wavelet method is better than the traditional method, with less loss of the spectral peaks and a more complete reduction of statistical fluctuation. (author)
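
    The abstract does not spell out the thresholding rule, but nonlinear wavelet smoothing of a counting spectrum is commonly done by zeroing small detail coefficients while keeping large ones (the peaks). A minimal sketch with a Haar transform and a robust MAD-based threshold on synthetic data; all numbers are illustrative, not the paper's:

```python
import random

def haar_step(x):
    """One level of the Haar transform: pairwise averages and details."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_merge(avg, det):
    """Inverse of haar_step."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def mad_sigma(det):
    """Robust noise-scale estimate from the detail coefficients."""
    s = sorted(abs(d) for d in det)
    return s[len(s) // 2] / 0.6745

def wavelet_smooth(spectrum, levels=2, k=3.0):
    """Zero Haar details below k * (robust noise scale) at each level.

    Small details are treated as statistical fluctuation; large details
    (genuine peaks) pass through, which is why peak shape suffers less
    than with a moving-average smoother.
    """
    avg, kept = list(spectrum), []
    for _ in range(levels):
        avg, det = haar_step(avg)
        thr = k * mad_sigma(det)
        kept.append([d if abs(d) > thr else 0.0 for d in det])
    for det in reversed(kept):
        avg = haar_merge(avg, det)
    return avg

random.seed(0)
# Synthetic low-level spectrum: flat background, one peak, counting noise.
clean = [10.0] * 64
for i, h in zip(range(30, 34), (5.0, 40.0, 40.0, 5.0)):
    clean[i] += h
noisy = [c + random.gauss(0, 3) for c in clean]
smoothed = wavelet_smooth(noisy)
```

    The hard threshold leaves the large peak-related details untouched, so the smoothed spectrum tracks the peak while the background fluctuation is suppressed.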

  18. Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.

    Science.gov (United States)

    Zhou, Weidong; Gotman, Jean

    2004-01-01

    In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a signal processing technique based on higher-order statistics that is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics explicitly, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
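
    The pre-whitening step mentioned above can be sketched with symmetric (eigenvalue-based) whitening; the two synthetic "sources" and the mixing matrix below are illustrative stand-ins, not EEG data:

```python
import numpy as np

def prewhiten(X):
    """Decorrelate mixed signals before ICA.

    X: (channels, samples) array of mixtures. Returns Z with identity
    covariance and the whitening matrix V, so that Z = V @ (X - mean).
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    eigvals, E = np.linalg.eigh(cov)
    V = E @ np.diag(1.0 / np.sqrt(eigvals)) @ E.T   # symmetric whitening
    return V @ Xc, V

t = np.linspace(0, 1, 2000)
sources = np.vstack([np.sin(2 * np.pi * 10 * t),             # "EEG-like" rhythm
                     np.sign(np.sin(2 * np.pi * 1.2 * t))])  # "ECG-like" train
A = np.array([[1.0, 0.6], [0.4, 1.0]])                       # unknown mixing
Z, V = prewhiten(A @ sources)
```

    After whitening, the ICA stage only has to find a rotation, which is what makes the subsequent separation fast.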

  19. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    International Nuclear Information System (INIS)

    Yang, Xiaofeng; Fei, Baowei

    2011-01-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics, we apply the Radon transform to the original MR images and use a Gaussian noise model to process the MR sinogram image. A translation-invariant wavelet transform is employed to decompose the MR sinogram into multiple scales in order to effectively denoise the images. Based on the nature of Rician noise, we estimate the noise variance at different scales. The inverse Radon transform is then applied to the denoised sinogram in order to reconstruct the original MR images. Phantom images, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over traditional methods. Our method can reduce Rician noise while preserving the key image details and features. The wavelet denoising method can have wide applications in MRI as well as other imaging modalities

  20. A Study of Coherent Structures using Wavelet Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kaspersen, J H

    1996-05-01

    Turbulence is important in many fields of engineering, for example in estimating or minimizing drag on surfaces. It is known that turbulent flows contain coherent structures, which implies that a turbulent shear flow can be decomposed into coherent structures and random motion. It is generally accepted that coherent structures are responsible for significant transport of mass, heat and momentum. This doctoral thesis presents and discusses a new algorithm to detect coherent structures based on Wavelet transformations, a transform similar to the Fourier transform but providing information on both frequency and scale. The new detection scheme does not require any predefined threshold or integration time, and its general performance is found to be very good. Wind tunnel experiments were performed to obtain data for analysis. Scalograms resulting from the Wavelet transform show clearly that coherent structures exist in turbulent flows. These structures are shown to contribute considerably to the shear stresses. The contribution from the organized motion to the normal stresses close to the wall appears to be considerably smaller. Direct Numerical Simulation (DNS) channel flow seems to be more organized than Zero Pressure Gradient (ZPG) flows. The topology of ZPG flows was studied using a multiple hot wire arrangement, and conditionally averaged streamlines based on detections from the Wavelet method are presented. It is shown that the coherent structures produce large amounts of both vorticity and strain at the detection point. 56 refs., 92 figs., 3 tabs.

  1. Wavelet analysis of the nuclear phase space

    International Nuclear Information System (INIS)

    Jouault, B.; Sebille, F.; De La Mota, V.

    1997-01-01

    The description of complex systems requires selecting and compacting the relevant information. Wavelet theory constitutes an appropriate framework for defining adapted representation bases obtained from a controlled hierarchy of approximations. The optimization of the wavelet analysis depends mainly on the chosen analysis method and wavelet family. Here the analysis of the harmonic oscillator wave function was carried out using a bi-orthogonal spline wavelet basis, which satisfies the symmetry requirements and can be approximated by simple analytical functions. The goal of this study was to determine a selection criterion that minimizes the number of elements needed for an optimal description of the analysed functions. An essential point consists in exploiting the complementarity of the wavelets and the scale functions in order to reproduce the oscillating and peripheral parts of the wave functions. The wavelet basis representation allows defining a sequence of approximations of the density matrix. Thus, this wavelet representation of the density matrix offers an optimal basis for describing both the static nuclear configurations and their time evolution. This information-compacting procedure is performed in a controlled manner and preserves the structure of the system wave functions and consequently some of their quantum properties

  2. Applications of a fast, continuous wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Dress, W.B.

    1997-02-01

    A fast, continuous, wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
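
    The core idea, sampling the mother wavelet directly in the frequency domain so that each scale costs one inverse FFT regardless of the wavelet's support in time, can be sketched as follows. The Morlet wavelet and the test tone are illustrative choices, not the paper's:

```python
import numpy as np

def fft_cwt(signal, scales, fs, w0=6.0):
    """Continuous wavelet transform computed in frequency space.

    The analytic Morlet mother wavelet is sampled in the frequency
    domain (a Gaussian bump centred at w0/s on positive frequencies),
    so each scale reduces to one multiply and one inverse FFT.
    """
    n = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)  # angular frequency grid
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        psi_hat[omega < 0] = 0.0                       # analytic wavelet
        out[i] = np.fft.ifft(sig_hat * np.conj(psi_hat)) * np.sqrt(s)
    return out

fs = 200.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)              # 5 Hz test tone
cand = np.arange(2.0, 13.0)                # candidate frequencies, Hz
scales = 6.0 / (2 * np.pi * cand)          # scale placing the Morlet peak at each candidate
W = fft_cwt(x, scales, fs)
power = np.abs(W).mean(axis=1)
best = cand[np.argmax(power)]              # ridge frequency
```

    Because the scale grid is chosen independently of the time-domain sampling, the same code works for arbitrarily fine scale spacing; here the scalogram ridge lands on the 5 Hz tone.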

  3. Neuro-Fuzzy Wavelet Based Adaptive MPPT Algorithm for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Syed Zulqadar Hassan

    2017-03-01

    Full Text Available An intelligent control of photovoltaics is necessary to ensure fast response and high efficiency under different weather conditions. This is often arduous to accomplish using traditional linear controllers, as photovoltaic systems are nonlinear and contain several uncertainties. Based on the analysis of the existing literature on Maximum Power Point Tracking (MPPT) techniques, a high-performance neuro-fuzzy indirect wavelet-based adaptive MPPT control is developed in this work. The proposed controller combines the reasoning capability of fuzzy logic, the learning capability of neural networks and the localization properties of wavelets. In the proposed system, the Hermite Wavelet-embedded Neural Fuzzy (HWNF)-based gradient estimator is adopted to estimate the gradient term and makes the controller indirect. The performance of the proposed controller is compared with different conventional and intelligent MPPT control techniques. MATLAB results show its superiority over other existing techniques in terms of fast response, power quality and efficiency.

  4. Identification of weak nonlinearities on damping and stiffness by the continuous wavelet transform

    Science.gov (United States)

    Ta, Minh-Nghi; Lardiès, Joseph

    2006-05-01

    We consider the free response of a nonlinear vibrating system. Using the ridges and skeletons of the continuous wavelet transform, we identify weak nonlinearities on damping and stiffness and estimate their physical parameters. The crucial choice of the son wavelet function is obtained using an optimization technique based on the entropy of the continuous wavelet transform. The method is applied to simulated single-degree-of-freedom systems and multi-degree-of-freedom systems with nonlinearities on damping and stiffness. Experimental validation of the nonlinear identification and parameter estimation method is presented. The experimental system is a clamped beam with nonlinearities on damping and stiffness and these nonlinearities are identified and quantified from a displacement sensor.

  5. Reconciling apparent inconsistencies in estimates of terrestrial CO2 sources and sinks

    International Nuclear Information System (INIS)

    House, J.I.; Prentice, I.C.; Heimann, M.; Ramankutty, N.

    2003-01-01

    The magnitude and location of terrestrial carbon sources and sinks remain subject to large uncertainties. Estimates of terrestrial CO 2 fluxes from ground-based inventory measurements typically find less carbon uptake than inverse model calculations based on atmospheric CO 2 measurements, while a wide range of results have been obtained using models of different types. However, when full account is taken of the processes, pools, time scales and geographic areas being measured, the different approaches can be understood as complementary rather than inconsistent, and can provide insight as to the contribution of various processes to the terrestrial carbon budget. For example, quantitative differences between atmospheric inversion model estimates and forest inventory estimates in northern extratropical regions suggest that carbon fluxes to soils (often not accounted for in inventories), and into non-forest vegetation, may account for about half of the terrestrial uptake. A consensus of inventory and inverse methods indicates that, in the 1980s, northern extratropical land regions were a large net sink of carbon, and the tropics were approximately neutral (albeit with high uncertainty around the central estimate of zero net flux). The terrestrial flux in southern extratropical regions was small. Book-keeping model studies of the impacts of land-use change indicated a large source in the tropics and almost zero net flux for most northern extratropical regions; similar land-use change impacts were also recently obtained using process-based models. The difference between book-keeping land-use change model studies and inversions or inventories was previously interpreted as a 'missing' terrestrial carbon uptake. Land-use change studies do not account for environmental or many management effects (which are implicitly included in inventory and inversion methods).
Process-based model studies have quantified the impacts of CO 2 fertilisation and climate change in addition to

  6. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    Science.gov (United States)

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations.
The latter approach is
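
    The draw-adjust-repeat loop described above can be sketched as a small Monte Carlo simulation; the bias distribution and standard error below are invented for illustration and are not the study's actual bias parameters:

```python
import math
import random

def bias_analysis(conventional_rr, n_iter=20000, seed=42):
    """Monte Carlo bias analysis of a conventional risk ratio.

    Each iteration draws a bias parameter (here a single hypothetical
    multiplicative bias factor on the log scale) together with random
    error, adjusts the estimate, and records the result. The spread of
    adjusted estimates is the simulation interval, which folds
    systematic as well as random error into the uncertainty.
    """
    rng = random.Random(seed)
    log_rr = math.log(conventional_rr)
    se_random = 0.3                      # assumed conventional standard error
    adjusted = []
    for _ in range(n_iter):
        log_bias = rng.gauss(0.2, 0.25)  # assumed bias-parameter distribution
        resampled = rng.gauss(log_rr, se_random)
        adjusted.append(math.exp(resampled - log_bias))
    adjusted.sort()
    med = adjusted[n_iter // 2]
    lo, hi = adjusted[int(0.025 * n_iter)], adjusted[int(0.975 * n_iter)]
    return med, (lo, hi)

median_rr, interval = bias_analysis(2.6)
```

    With a bias assumed to act away from the null, the median adjusted ratio falls below the conventional 2.6, and the simulation interval is wider than an interval reflecting random error alone, mirroring the qualitative behaviour reported above.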

  7. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Abstract Background The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a

  8. Adapted wavelet analysis from theory to software

    CERN Document Server

    Wickerhauser, Mladen Victor

    1994-01-01

    This detail-oriented text is intended for engineers and applied mathematicians who must write computer programs to perform wavelet and related analysis on real data. It contains an overview of mathematical prerequisites and proceeds to describe hands-on programming techniques to implement special programs for signal analysis and other applications. From the table of contents: - Mathematical Preliminaries - Programming Techniques - The Discrete Fourier Transform - Local Trigonometric Transforms - Quadrature Filters - The Discrete Wavelet Transform - Wavelet Packets - The Best Basis Algorithm - Multidimensional Library Trees - Time-Frequency Analysis - Some Applications - Solutions to Some of the Exercises - List of Symbols - Quadrature Filter Coefficients

  9. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

    Full Text Available The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enable accurate measurements of low-light radiances, which leads to enhanced quantitative applications at night. The finer spatial resolution of the DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of the DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low-light calibration accuracy was previously estimated at a moderate 15% using extended sources, while the long-term stability has yet to be characterized. There are also several science-related questions to be answered, for example: how the Earth's atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; and furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of the VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be
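
    The fundamental radiometry behind such a radiant-power estimate can be sketched for an unresolved, isotropic point light that fills a single DNB pixel; the radiance value, pixel size, and atmospheric transmittance below are illustrative assumptions, not values from the paper:

```python
import math

def point_source_power(radiance_w_cm2_sr, pixel_area_m2, transmittance=0.8):
    """Estimate the radiant power (W) of an unresolved night light.

    Assumes the source is isotropic, fills one DNB pixel, and is
    attenuated only by a hypothetical atmospheric transmittance.
    Radiance is in W cm^-2 sr^-1 (the DNB unit); pixel area in m^2.
    """
    radiance_w_m2_sr = radiance_w_cm2_sr * 1e4                    # per cm^2 -> per m^2
    intensity = radiance_w_m2_sr * pixel_area_m2 / transmittance  # W sr^-1 at the source
    return 4 * math.pi * intensity                                # isotropic emission

# A ~742 m x 742 m DNB pixel seeing 5e-9 W cm^-2 sr^-1 (illustrative numbers):
power_w = point_source_power(5e-9, 742.0 * 742.0)
```

    The result lands in the hundreds of watts, the right order of magnitude for bridge or vessel lighting, which is the kind of consistency check against in situ installations described above.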

  10. Estimation of the Plant Time Constant of Current-Controlled Voltage Source Converters

    DEFF Research Database (Denmark)

    Vidal, Ana; Yepes, Alejandro G.; Malvar, Jano

    2014-01-01

    Precise knowledge of the plant time constant is essential to perform a thorough analysis of the current control loop in voltage source converters (VSCs). As the loop behavior can be significantly influenced by the VSC working conditions, the effects associated with converter losses should be included in the model through an equivalent series resistance. In a recent work, an algorithm to identify this parameter was developed, considering the inductance value as known and practically constant. Nevertheless, the plant inductance can also present important uncertainties with respect to the inductance of the VSC interface filter measured at rated conditions. This paper extends that method so that both parameters of the plant time constant (resistance and inductance) are estimated. Such enhancement is achieved through the evaluation of the closed-loop transient responses of both axes of the synchronous...
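
    One generic way to recover both parameters of a first-order R-L plant is to fit the current step response i(t) = (V/R)(1 - exp(-t/tau)) and read off tau = L/R. This is an identification sketch on noise-free synthetic data, not the closed-loop method the paper proposes; all parameter values are illustrative:

```python
import math

# Hypothetical R-L plant excited by a voltage step (illustrative values).
R_true, L_true, V = 0.5, 5e-3, 10.0        # ohm, henry, volt
tau_true = L_true / R_true                 # plant time constant, s
ts = [k * 1e-4 for k in range(1, 200)]     # 0.1 ms sampling
i_final = V / R_true                       # steady-state current
currents = [i_final * (1 - math.exp(-t / tau_true)) for t in ts]

# Log-linearise: ln(1 - i/i_final) = -t/tau, then fit the slope by least squares.
xs = [t for t, i in zip(ts, currents) if i < 0.99 * i_final]
ys = [math.log(1 - i / i_final) for t, i in zip(ts, currents) if i < 0.99 * i_final]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) \
        / (n * sum(x * x for x in xs) - sum(xs) ** 2)
tau_est = -1.0 / slope                     # estimated time constant
R_est = V / i_final                        # resistance from the steady state
L_est = tau_est * R_est                    # inductance from tau = L/R
```

    The same tau = L/R relationship is why uncertainty in either parameter propagates directly into the current-loop model discussed above.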

  11. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    Science.gov (United States)

    Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

    Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  12. An Investigation on Micro-Raman Spectra and Wavelet Data Analysis for Pemphigus Vulgaris Follow-up Monitoring

    OpenAIRE

    Camerlingo, Carlo; Zenone, Flora; Perna, Giuseppe; Capozzi, Vito; Cirillo, Nicola; Gaeta, Giovanni Maria; Lepore, Maria

    2008-01-01

    A wavelet multi-component decomposition algorithm has been used for data analysis of micro-Raman spectra of blood serum samples from patients affected by pemphigus vulgaris at different stages. Pemphigus is a chronic, autoimmune, blistering disease of the skin and mucous membranes with a potentially fatal outcome. Spectra were measured by means of a Raman confocal microspectrometer apparatus using the 632.8 nm line of a He-Ne laser source. A discrete wavelet transform decomposition method has...

  13. The cross wavelet and wavelet coherence analysis of spatio-temporal rainfall-groundwater system in Pingtung plain, Taiwan

    Science.gov (United States)

    Lin, Yuan-Chien; Yu, Hwa-Lung

    2013-04-01

    The increasing frequency and intensity of extreme rainfall events have been observed recently in Taiwan. In particular, Typhoon Morakot, Typhoon Fanapi, and Typhoon Megi consecutively brought record-breaking intensity and magnitude of rainfall to different locations of Taiwan in the past two years. However, records show the extreme rainfall events did not elevate the amount of annual rainfall accordingly. Conversely, the increasing frequency of droughts has also been occurring in Taiwan. Governmental agencies and scientific communities have confronted the challenge of coming up with effective adaptation strategies for natural disaster reduction and sustainable environment establishment. Groundwater has long been a reliable water source for a variety of domestic, agricultural, and industrial uses because of its stable quantity and quality. In Taiwan, groundwater accounts for the largest proportion of all water resources, at about 40%. This study plans to identify and quantify the nonlinear relationship between precipitation and groundwater recharge, and to find the non-stationary time-frequency relations between the variations of rainfall and groundwater levels in order to understand the phase difference of the time series. Groundwater level data and over 50 years of hourly rainfall records obtained from 20 weather stations in Pingtung Plain, Taiwan have been collected. The space-time patterns are extracted by the EOF method, a decomposition of a signal or data set in terms of orthogonal basis functions determined from the data for both time series and spatial patterns, to identify the important spatial patterns of groundwater recharge; the cross wavelet and wavelet coherence methods are then used to identify the relationship between rainfall and groundwater levels. Results show that the EOF method can specify the spatial-temporal patterns which represent certain geological characteristics and other mechanisms of groundwater, and the wavelet coherence method can identify general correlation between
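
    The EOF decomposition described above amounts to a singular value decomposition of the space-time anomaly matrix. A sketch on a synthetic five-station record with one dominant shared mode (all data invented):

```python
import numpy as np

def eof_decompose(field):
    """EOF analysis of a space-time data matrix via SVD.

    field: (time, station) observations. Returns the spatial patterns
    (EOFs, one per row), the principal-component time series (one per
    column), and the fraction of variance explained by each mode.
    """
    anomalies = field - field.mean(axis=0)
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt, U * s, explained

rng = np.random.default_rng(7)
t = np.arange(600)
pattern = np.array([1.0, 0.8, 0.5, -0.2, -0.7])          # dominant spatial mode
signal = np.outer(np.sin(2 * np.pi * t / 120), pattern)  # slow shared oscillation
data = signal + 0.1 * rng.standard_normal((600, 5))
eofs, pcs, var = eof_decompose(data)
```

    The leading EOF recovers the shared spatial pattern and its PC time series carries the common oscillation, which is what would then be compared against rainfall with the cross wavelet and wavelet coherence methods.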

  14. A simple method for estimating potential source term bypass fractions from confinement structures

    International Nuclear Information System (INIS)

    Kalinich, D.A.; Paddleford, D.F.

    1997-01-01

    Confinement structures house many of the operating processes at the Savannah River Site (SRS). Under normal operating conditions, a confinement structure in conjunction with its associated ventilation systems prevents the release of radiological material to the environment. However, under potential accident conditions, the performance of the ventilation systems and the integrity of the structure may be challenged. In order to calculate the radiological consequences associated with a potential accident (e.g. fires, explosions, spills, etc.), it is necessary to determine the fraction of the source term initially generated by the accident that escapes from the confinement structure to the environment. While it would be desirable to estimate the potential bypass fraction using sophisticated control-volume/flow path computer codes (e.g. CONTAIN, MELCOR, etc.) in order to take as much credit as possible for the mitigative effects of the confinement structure, there are many instances where using such codes is not tractable due to limits on the level of effort allotted to perform the analysis. Moreover, the current review environment, with its emphasis on deterministic/bounding versus probabilistic/best-estimate analysis, discourages using analytical techniques that require the consideration of a large number of parameters. Discussed herein is a simplified control-volume/flow path approach for calculating the source term bypass fraction that is amenable to solution in a spreadsheet or with a commercial mathematical solver (e.g. MathCad or Mathematica). It considers the effects of wind and fire pressure gradients on the structure, ventilation system operation, and Halon discharges. Simple models are used to characterize the engineered and non-engineered flow paths. By making judicious choices for the limited set of problem parameters, the results from this approach can be defended as bounding and conservative
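
    A minimal version of the simplified control-volume/flow-path idea: treat each leakage path as an orifice and split the accident-driven flow between a filtered exhaust path and an unfiltered bypass path. The areas, loss coefficient, and overpressure below are hypothetical, not SRS values or the report's models:

```python
import math

def path_flow(delta_p, area, loss_coeff=2.0, rho=1.2):
    """Volumetric flow (m^3/s) through a flow path for a pressure
    difference (Pa), using a simple orifice/loss-coefficient model:
    q = A * sqrt(2*|dP| / (rho*K)), signed with the pressure gradient."""
    return area * math.copysign(
        math.sqrt(2 * abs(delta_p) / (rho * loss_coeff)), delta_p)

# Hypothetical confinement at +50 Pa accident overpressure: an engineered
# exhaust path (filtered) and a non-engineered leak path (unfiltered bypass).
q_exhaust = path_flow(50.0, area=0.5)
q_leak = path_flow(50.0, area=0.02)
bypass_fraction = q_leak / (q_exhaust + q_leak)
```

    With both paths seeing the same driving pressure, the bypass fraction reduces to an area ratio; a spreadsheet-level balance of this kind is what makes the approach tractable while still bounding the release.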

  15. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties, we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  16. Data Sources for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  17. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  18. Estimated Daily Intake and Seasonal Food Sources of Quercetin in Japan

    Directory of Open Access Journals (Sweden)

    Haruno Nishimuro

    2015-04-01

    Full Text Available Quercetin is a promising food component, which can prevent lifestyle-related diseases. To understand the dietary intake of quercetin in the subjects of a population-based cohort study and in the Japanese population, we first determined the quercetin content in foods available in the market during June and July in or near a town in Hokkaido, Japan. Red leaf lettuce, asparagus, and onions contained high amounts of quercetin derivatives. We then estimated the daily quercetin intake by 570 residents aged 20–92 years in the town using a food frequency questionnaire (FFQ). The average and median quercetin intakes were 16.2 and 15.5 mg day−1, respectively. The quercetin intakes by men were lower than those by women; the quercetin intakes showed a low correlation with age in both men and women. The estimated quercetin intake was similar during summer and winter. Quercetin was mainly ingested from onions and green tea, both in summer and in winter. Vegetables, such as asparagus, green pepper, tomatoes, and red leaf lettuce, were good sources of quercetin in summer. Our results will help to elucidate the association between quercetin intake and risks of lifestyle-related diseases by further prospective cohort study and establish healthy dietary requirements with the consumption of more physiologically useful components from foods.
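
    An FFQ-based intake estimate reduces to a frequency-weighted sum over food items divided into a daily rate; the quercetin contents per serving and the serving frequencies below are hypothetical illustrations, not the study's measured data:

```python
# Hypothetical food-composition table: mg quercetin per serving.
quercetin_per_serving = {"onion": 22.0, "green_tea": 2.0, "asparagus": 7.0}

def daily_intake(weekly_servings, content=quercetin_per_serving):
    """Estimated daily quercetin intake (mg/day) from FFQ responses,
    given servings per week for each food item."""
    return sum(content[food] * n for food, n in weekly_servings.items()) / 7.0

# One respondent's weekly serving counts from the questionnaire.
respondent = {"onion": 4, "green_tea": 14, "asparagus": 2}
intake = daily_intake(respondent)   # mg/day
```

    Summing such per-respondent estimates over the cohort and splitting by season is what yields population figures like the averages reported above.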

  19. Estimation of aluminum and argon activation sources in the HANARO coolant

    International Nuclear Information System (INIS)

    Jun, Byung Jin; Lee, Byung Chul; Kim, Myong Seop

    2010-01-01

    The activation products of aluminum and argon are key radionuclides for operational and environmental radiological safety during the normal operation of open-tank-in-pool type research reactors using aluminum-clad fuels. Their activities measured in the primary coolant and pool surface water of HANARO have been consistent. We estimated their sources from the measured activities and then compared these values with the production rates obtained by a core calculation. For each aluminum activation product, an equivalent aluminum thickness (EAT), in which its production rate is identical to its release rate into the coolant, is determined. For the argon activation calculation, the saturated argon concentration in the water at the temperature of the pool surface is assumed. The EATs are 5680, 266 and 1.2 nm, respectively, for Na-24, Mg-27 and Al-28, which are much larger than the flight lengths of the respective recoil nuclides. These values are consistent with the water solubilities and with the half-lives. The EAT for Na-24 is similar to the average oxide layer thickness (OLT) of the fuel cladding as well; hence, the majority of the Na-24 in the oxide layer may be released to the coolant. However, while the average OLT clearly increases with fuel burn-up during an operation cycle, its effect on the pool-top radiation is not distinguishable. The source of Ar-41 is in good agreement with the calculated reaction rate of Ar-40 dissolved in the coolant.

  20. An appreciation of the events, models and data used for LMFBR radiological source term estimations

    International Nuclear Information System (INIS)

    Keir, D.; Clough, P.N.

    1989-01-01

    In this report, the events, models and data currently available for analysis of accident source terms in liquid metal cooled fast neutron reactors are reviewed. The types of hypothetical accidents considered are the low-probability, more extreme types of severe accident, involving significant degradation of the core, which may lead to the release of radionuclides. The base case reactor design considered is a commercial-scale sodium pool reactor of the CDFR type. The feasibility of an integrated calculational approach to radionuclide transport and speciation (such as is used for LWR accident analysis) is explored. It is concluded that there is no fundamental obstacle, in terms of scientific data or understanding of the phenomena involved, to such an approach. However, this must be regarded as a long-term goal because of the large amount of effort still required to advance development to a stage comparable with LWR studies. Particular aspects of LMFBR severe accident phenomenology which require attention are the behaviour of radionuclides during core disruptive accident bubble formation and evolution, and during the less rapid sequences of core melt under sodium. The basic requirement for improved thermal hydraulic modelling of core, coolant and structural materials, in these and other scenarios, is highlighted as fundamental to the accuracy and realism of source term estimations. The coupling of such modelling to that of radionuclide behaviour is seen as the key to future development in this area.

  1. Estimation of lead sources in a Japanese cedar ecosystem using stable isotope analysis

    International Nuclear Information System (INIS)

    Itoh, Yuko; Noguchi, Kyotaro; Takahashi, Masamichi; Okamoto, Toru; Yoshinaga, Shuichiro

    2007-01-01

    Anthropogenic Pb affects the environment worldwide. To understand its effect on a forest ecosystem, Pb isotope ratios were determined in precipitation, various components of vegetation, the forest floor, soil, and parent material in a Japanese cedar (Cryptomeria japonica D. Don) forest stand. The average 206Pb/207Pb ratio in bulk precipitation was 1.14 ± 0.01 (mean ± SD), whereas that in the subsoil (20-130 cm) was 1.18 ± 0.01. Intermediate ratios ranging from 1.15 to 1.16 were observed in the vegetation, the forest floor, and the surface soil (0-10 cm). Using the 206Pb/207Pb ratios, the contribution of anthropogenic sources to the Pb accumulated in the forest was estimated with a simple binary mixing model. Sixty-two percent of the Pb in the forest floor, 71% in the vegetation, and 55% in the surface soil (0-10 cm) originated from anthropogenic sources, but only 16% in the sub-surface soil (10-20 cm) was anthropogenic. These results suggest that internal Pb cycling occurs mainly between surface soil and vegetation in a Japanese cedar ecosystem, and that anthropogenic Pb strongly influences this cycling. Although the Japanese cedar ecosystem has a shallow forest floor, very little atmospherically derived Pb migrated below 10 cm in depth.
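
    The two-end-member mixing arithmetic behind the simple binary mixing model can be sketched as follows. The end-member ratios (1.14 for precipitation-derived anthropogenic Pb, 1.18 for natural subsoil Pb) are taken from the abstract; the function name and the sample ratio are illustrative.

    ```python
    def anthropogenic_fraction(r_sample, r_anthro=1.14, r_natural=1.18):
        """Two-end-member (binary) mixing: fraction of Pb contributed by the
        anthropogenic end-member, given 206Pb/207Pb ratios of the sample
        and of the two end-members."""
        return (r_natural - r_sample) / (r_natural - r_anthro)

    # A forest-floor ratio of ~1.155 yields ~62% anthropogenic Pb,
    # consistent with the figure reported in the abstract.
    print(round(anthropogenic_fraction(1.155), 2))
    ```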

  2. Source estimation for propagation processes on complex networks with an application to delays in public transportation systems

    NARCIS (Netherlands)

    Manitz, J. (Juliane); Harbering, J. (Jonas); M.E. Schmidt (Marie); T. Kneib (Thomas); A. Schöbel (Anita)

    2017-01-01

    The correct identification of the source of a propagation process is crucial in many research fields. As a specific application, we consider source estimation of delays in public transportation networks. We propose two approaches: an effective distance median and a backtracking method.

  3. Estimating the costs of work-related accidents and ill-health: An analysis of European data sources

    NARCIS (Netherlands)

    Heuvel, S. van den; Zwaan, L. van der; Dam, L. van; Oude Hengel, K.M.; Eekhout, I.; Emmerik, M.L. van; Oldenburg, C.; Brück, C.; Janowski, P.; Wilhelm, C.

    2017-01-01

    This report presents the results of a survey of national and international data sources on the costs of work-related injuries, illnesses and deaths. The aim was to evaluate the quality and comparability of different sources as a first step towards estimating the costs of accidents and ill-health at

  4. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-19

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is formulated as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through numerical simulations. The robustness of the method in the presence of noise is also studied.

  5. Stutter seismic source

    Energy Technology Data Exchange (ETDEWEB)

    Gumma, W. H.; Hughes, D. R.; Zimmerman, N. S.

    1980-08-12

    An improved seismic prospecting system comprising the use of a closely spaced sequence of source initiations at essentially the same location to provide shorter objective-level wavelets than are obtainable with a single pulse. In a preferred form, three dynamite charges are detonated in the same or three closely spaced shot holes to generate a downward traveling wavelet having increased high frequency content and reduced content at a peak frequency determined by initial testing.

  6. Estimation of nitrite in source-separated nitrified urine with UV spectrophotometry.

    Science.gov (United States)

    Mašić, Alma; Santos, Ana T L; Etter, Bastian; Udert, Kai M; Villez, Kris

    2015-11-15

    Monitoring of nitrite is essential for an immediate response to and prevention of irreversible failure of decentralized biological urine nitrification reactors. Although a few sensors are available for nitrite measurement, none of them is suitable for applications in which both nitrite and nitrate are present in very high concentrations. Such is the case in collected source-separated urine stabilized by nitrification for long-term storage. Ultraviolet (UV) spectrophotometry in combination with chemometrics is a promising option for monitoring of nitrite. In this study, an immersible in situ UV sensor is investigated for the first time so as to establish a relationship between UV absorbance spectra and nitrite concentrations in nitrified urine. The study focuses on the effects of suspended particles and saturation on the absorbance spectra and the chemometric model performance. Detailed analysis indicates that suspended particles in nitrified urine have a negligible effect on nitrite estimation, so sample filtration is not necessary as a pretreatment. In contrast, saturation due to very high concentrations affects the model performance severely, suggesting dilution as an essential sample preparation step. However, this can also be mitigated by simple removal of the saturated, lower end of the UV absorbance spectra and extraction of information from the secondary, weaker nitrite absorbance peak. This approach allows estimation of nitrite with a simple chemometric model and without sample dilution. These results are promising for a practical application of the UV sensor for in situ nitrite measurement in a urine nitrification reactor, given the exceptional quality of the nitrite estimates in comparison to previous studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    Directory of Open Access Journals (Sweden)

    T. R. H. Holmes

    2018-02-01

    Full Text Available A newly developed microwave (MW) land surface temperature (LST) product is used to substitute for thermal infrared (TIR)-based LST in the Atmosphere–Land Exchange Inverse (ALEXI) modeling framework for estimating evapotranspiration (ET) from space. ALEXI implements a two-source energy balance (TSEB) land surface scheme in a time-differential approach, designed to minimize sensitivity to absolute biases in input records of LST through the analysis of the rate of temperature change in the morning. Thermal infrared retrievals of the diurnal LST curve, traditionally from geostationary platforms, are hindered by cloud cover, reducing model coverage on any given day. This study tests the utility of diurnal temperature information retrieved from a constellation of satellites with microwave radiometers that together provide six to eight observations of Ka-band brightness temperature per location per day. This represents the first-ever attempt at a global implementation of ALEXI with MW-based LST and is intended as the first step towards providing all-weather capability to the ALEXI framework. The analysis is based on 9-year-long, global records of ALEXI ET generated using both MW- and TIR-based diurnal LST information as input. In this study, the MW-LST (MW-based LST) sampling is restricted to the same clear-sky days as in the IR-based implementation, to be able to analyze the impact of changing the LST dataset separately from the impact of sampling all-sky conditions. The results show that long-term bulk ET estimates from both LST sources agree well, with a spatial correlation of 92% for total ET in the Europe–Africa domain and agreement in seasonal (3-month) totals of 83–97% depending on the time of year. Most importantly, ALEXI-MW (MW-based ALEXI) also matches ALEXI-IR (IR-based ALEXI) very closely in terms of 3-month inter-annual anomalies, demonstrating its ability to capture the development and extent of drought conditions. Weekly ET output

  8. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and in the frequency content of the seismograms, it often lacks rigorous justification. In practice, the small event may have a finite duration, in which case the retrieved RSTF is a biased estimate of the large-event STF. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
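
    As an illustration of the deconvolution at the heart of the EGF method, a minimal frequency-domain (water-level) variant can be sketched as below. The study above uses a time-domain matrix deconvolution, so this is a stand-in for the same idea; the signals, the water-level value, and all names are illustrative.

    ```python
    import numpy as np

    def waterlevel_deconvolve(large, small, level=0.01):
        """Estimate a relative source time function by spectral division of
        the large-event record by the small-event (EGF) record. A water
        level replaces near-zero spectral amplitudes of the EGF to
        stabilize the division."""
        n = len(large)
        L, S = np.fft.rfft(large, n), np.fft.rfft(small, n)
        floor = level * np.max(np.abs(S))
        S_reg = np.where(np.abs(S) < floor,
                         floor * np.exp(1j * np.angle(S)), S)
        return np.fft.irfft(L / S_reg, n)

    # Synthetic check: the "large" record is the EGF convolved with a
    # triangular STF, so deconvolution should return that triangle.
    egf = np.random.default_rng(0).standard_normal(256)
    stf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
    large = np.convolve(egf, stf)[:256]
    rstf = waterlevel_deconvolve(large, egf)
    print(np.argmax(rstf))  # peak lands at the STF peak
    ```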

  9. Digital transceiver implementation for wavelet packet modulation

    Science.gov (United States)

    Lindsey, Alan R.; Dill, Jeffrey C.

    1998-03-01

    Current transceiver designs for wavelet-based communication systems typically rely on analog waveform synthesis; however, digital processing is an important part of the eventual success of these techniques. In this paper, a transceiver implementation is presented for the recently introduced wavelet packet modulation scheme, which moves the analog processing as far as possible toward the antenna. The transceiver is based on the discrete wavelet packet transform, which incorporates level and node parameters for generalized computation of wavelet packets. In this transform no particular structure is imposed on the filter bank save dyadic branching, with a maximum level specified a priori and dependent mainly on speed and/or cost considerations. The transmitter/receiver structure takes a binary sequence as input and, based on the desired time-frequency partitioning, processes the signal through demultiplexing, synthesis, analysis, multiplexing and data determination completely in the digital domain, with the exception of conversion in and out of the analog domain for transmission.

  10. Numerical shaping of the ultrasonic wavelet

    International Nuclear Information System (INIS)

    Bonis, M.

    1991-01-01

    Improving the performance and quality of ultrasonic testing requires numerical control of the shape of the driving signal applied to the piezoelectric transducer. This allows precise shaping of the ultrasonic field wavelet and corrections for the physical defects of the transducer, which are mainly due to the damper or the lens. It also does away with the need for accurate electrical matching. It then becomes feasible to characterize, a priori, the ultrasonic wavelet by means of temporal and/or spectral specifications and to use, subsequently, an adaptive algorithm to calculate the corresponding driving wavelet. Moreover, the versatility resulting from the numerical control of this wavelet allows it to be changed in real time during a test.

  11. Scalets, wavelets and (complex) turning point quantization

    Science.gov (United States)

    Handy, C. R.; Brooks, H. A.

    2001-05-01

    Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z²x² + gx⁴, the quartic potential, V(x) = x⁴, and the very interesting and significant non-Hermitian potential V(x) = -(ix)³, recently studied by Bender and Boettcher.

  12. Effective implementation of wavelet Galerkin method

    Science.gov (United States)

    Finěk, Václav; Šimunková, Martina

    2012-11-01

    It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified the nonzero elements of the stiffness matrix corresponding to the above problem, and we perform the matrix-vector multiplication only with these nonzero elements.

  13. Framelets and wavelets algorithms, analysis, and applications

    CERN Document Server

    Han, Bin

    2017-01-01

    Marking a distinct departure from the perspectives of frame theory and discrete transforms, this book provides a comprehensive mathematical and algorithmic introduction to wavelet theory. As such, it can be used as either a textbook or reference guide. As a textbook for graduate mathematics students and beginning researchers, it offers detailed information on the basic theory of framelets and wavelets, complemented by self-contained elementary proofs, illustrative examples/figures, and supplementary exercises. Further, as an advanced reference guide for experienced researchers and practitioners in mathematics, physics, and engineering, the book addresses in detail a wide range of basic and advanced topics (such as multiwavelets/multiframelets in Sobolev spaces and directional framelets) in wavelet theory, together with systematic mathematical analysis, concrete algorithms, and recent developments in and applications of framelets and wavelets. Lastly, the book can also be used to teach on or study selected spe...

  14. Image Registration Using Redundant Wavelet Transforms

    National Research Council Canada - National Science Library

    Brown, Richard

    2001-01-01

    .... In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency...

  15. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    Science.gov (United States)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which a WKF has been substituted for the Kalman filter. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each of which is weighted by its probability, also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter, and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
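
    The one-level Haar split underlying the decomposition above can be sketched as follows; this is an illustrative fragment, not the authors' implementation, and the example flow values are made up.

    ```python
    import numpy as np

    def haar_level1(x):
        """One level of the Haar wavelet transform: split a signal into
        an approximation (low-pass) and a detail (high-pass) sub-band."""
        x = np.asarray(x, dtype=float)
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
        return a, d

    def haar_level1_inverse(a, d):
        """Perfect reconstruction from the two sub-bands."""
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        return x

    # A made-up discharge series; splitting and merging must round-trip.
    flow = np.array([2.0, 4.0, 6.0, 6.0, 5.0, 3.0, 2.0, 2.0])
    a, d = haar_level1(flow)
    print(np.allclose(haar_level1_inverse(a, d), flow))
    ```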

  16. Thin film description by wavelet coefficients statistics

    Czech Academy of Sciences Publication Activity Database

    Boldyš, Jiří; Hrach, R.

    2005-01-01

    Roč. 55, č. 1 (2005), s. 55-64 ISSN 0011-4626 Grant - others:GA UK(CZ) 173/2003 Institutional research plan: CEZ:AV0Z10750506 Keywords : thin films * wavelet transform * descriptors * histogram model Subject RIV: BD - Theory of Information Impact factor: 0.360, year: 2005 http://library.utia.cas.cz/separaty/2009/ZOI/boldys-thin film description by wavelet coefficients statistics .pdf

  17. Wavelet and Blend maps for texture synthesis

    OpenAIRE

    Du Jin-Lian; Wang Song; Meng Xianhai

    2011-01-01

    Blending is now a popular technology for large real-time texture synthesis. Nevertheless, creating a blend map during rendering is time- and computation-consuming work. In this paper, we exploit a method to create a kind of blend tile which can be tiled together seamlessly. Noting that a blend map is in fact a kind of image, which is a Markov Random Field and contains multiresolution signals, while wavelets are a powerful way to process multiresolution signals, we use wavelets to process the traditional ble...

  18. Wavelet Coherence Analysis of Change Blindness

    Directory of Open Access Journals (Sweden)

    Irfan Ali Memon

    2013-01-01

    Full Text Available Change blindness is the incapability of the brain to detect substantial visual changes in the presence of other visual interruption. The objective of this study is to examine EEG (electroencephalographic)-based changes in the functional connectivity of the brain due to change blindness. The functional connectivity was estimated using the wavelet-based MSC (Magnitude Squared Coherence) function of ERPs (Event Related Potentials). The ERPs of 30 subjects were recorded during a visual attention experiment in which subjects were instructed to detect changes in a visual stimulus presented before them on a computer monitor. A two-way ANOVA statistical test revealed a significant increase in both gamma- and theta-band MSCs, and a significant decrease in beta-band MSC, for change detection trials. These findings imply that change blindness might be associated with a lack of functional connectivity in the gamma and theta bands and an increase of functional connectivity in the beta band. Since the gamma, theta, and beta frequency bands reflect different functions of the cognitive process, such as maintenance, encoding, retrieval, matching, and workload of VSTM (Visual Short Term Memory), the change in functional connectivity might be correlated with these cognitive processes during change blindness.
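
    Magnitude-squared coherence itself can be illustrated with SciPy's Welch-based estimator; this is a stand-in for the wavelet-based MSC used in the study, and the two synthetic "channels" and the 10 Hz shared component are fabricated for illustration.

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 256.0
    t = np.arange(0, 8, 1 / fs)
    rng = np.random.default_rng(1)
    common = np.sin(2 * np.pi * 10 * t)          # shared 10 Hz component
    x = common + rng.standard_normal(t.size)     # synthetic "channel 1"
    y = common + rng.standard_normal(t.size)     # synthetic "channel 2"

    # MSC is near 1 where the two channels share a coherent component
    # and near 0 where they only contain independent noise.
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    peak = f[np.argmax(cxy)]
    print(peak)  # coherence peaks at the shared component
    ```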

  19. Wavelet coherence analysis of change blindness

    International Nuclear Information System (INIS)

    Memon, I.; Kalhoro, M.S.

    2013-01-01

    Change blindness is the incapability of the brain to detect substantial visual changes in the presence of other visual interruption. The objective of this study is to examine EEG (electroencephalographic)-based changes in the functional connectivity of the brain due to change blindness. The functional connectivity was estimated using the wavelet-based MSC (Magnitude Squared Coherence) function of ERPs (Event Related Potentials). The ERPs of 30 subjects were recorded during a visual attention experiment in which subjects were instructed to detect changes in a visual stimulus presented before them on a computer monitor. A two-way ANOVA statistical test revealed a significant increase in both gamma- and theta-band MSCs, and a significant decrease in beta-band MSC, for change detection trials. These findings imply that change blindness might be associated with a lack of functional connectivity in the gamma and theta bands and an increase of functional connectivity in the beta band. Since the gamma, theta, and beta frequency bands reflect different functions of the cognitive process, such as maintenance, encoding, retrieval, matching, and workload of VSTM (Visual Short Term Memory), the change in functional connectivity might be correlated with these cognitive processes during change blindness. (author)

  20. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    Science.gov (United States)

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools, using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  1. Contrasts between estimates of baseflow help discern multiple sources of water contributing to rivers

    Science.gov (United States)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2014-01-01

    This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occurs on the rising limbs of hydrographs followed by inflows of low
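
    The chemical mass balance referred to above reduces to a two-component tracer budget: stream water is treated as a mix of surface runoff and baseflow, each with its own conservative-tracer concentration. A sketch with hypothetical Cl concentrations (not values from the study):

    ```python
    def baseflow_fraction(c_river, c_runoff, c_baseflow):
        """Two-component chemical mass balance: fraction of streamflow
        supplied by baseflow, from a conservative tracer such as Cl.
        Q*C = Qb*C_bf + (Q - Qb)*C_ro  =>  Qb/Q = (C - C_ro)/(C_bf - C_ro)."""
        return (c_river - c_runoff) / (c_baseflow - c_runoff)

    # Hypothetical Cl (mg/L): river 40, surface runoff 10, groundwater 200.
    print(round(baseflow_fraction(40.0, 10.0, 200.0), 3))  # → 0.158
    ```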

  2. Estimation of marine source-term following Fukushima Dai-ichi accident

    International Nuclear Information System (INIS)

    Bailly du Bois, P.; Laguionie, P.; Boust, D.; Korsakissok, I.; Didier, D.; Fiévet, B.

    2012-01-01

    Contamination of the marine environment following the accident at the Fukushima Dai-ichi nuclear power plant represented the most important artificial radioactive release flux into the sea ever known. The radioactive marine pollution came from atmospheric fallout onto the ocean, direct release of contaminated water from the plant, and transport of radioactive pollution from leaching through contaminated soil. In the immediate vicinity of the plant (less than 500 m), the seawater concentrations reached 68 000 Bq L⁻¹ for 134Cs and 137Cs, and exceeded 100 000 Bq L⁻¹ for 131I in early April. Due to the accidental context of the releases, it is difficult to estimate the total amount of radionuclides introduced into seawater from data obtained in the plant. An evaluation is proposed here, based on measurements performed in seawater for monitoring purposes. Quantities of 137Cs in seawater in a 50-km area around the plant were calculated from interpolation of seawater measurements. The environmental half-time of seawater in this area is deduced from the time evolution of these quantities. This half-time appeared constant at about 7 days for 137Cs. These data allowed estimation of the principal marine inputs and their evolution in time: a total of 27 PBq (12 PBq–41 PBq) of 137Cs was estimated up to July 18. Even though this main release may be followed by residual inputs from the plant, river runoff and leakage from deposited sediments, it represents the principal source term that must be accounted for in future studies of the consequences of the accident on marine systems. The 137Cs from Fukushima will remain detectable for several years throughout the North Pacific, and the 137Cs/134Cs ratio will be a tracer for future studies. Highlights: ► The Fukushima Dai-ichi accident is the most important artificial radioactive release flux into the sea. ► Quantities of 137Cs in seawater are deduced from individual measurements. ► Local concentrations in
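
    Deducing an environmental half-time from the time evolution of a seawater inventory amounts to a ln-linear fit of inventory against time. A sketch with synthetic data: the 7-day value mirrors the abstract, but the inventory series below is fabricated (noise-free) for illustration.

    ```python
    import numpy as np

    # Hypothetical 137Cs inventories (PBq) in the coastal box, sampled
    # every 2 days and decaying with a 7-day environmental half-time.
    t = np.arange(0, 29, 2.0)              # days
    q = 20.0 * 0.5 ** (t / 7.0)            # synthetic inventory series

    # Slope of ln(inventory) vs. time gives the removal rate; the
    # environmental half-time is ln(2) divided by that rate.
    slope, _ = np.polyfit(t, np.log(q), 1)
    halftime = np.log(2) / -slope
    print(round(halftime, 1))  # → 7.0
    ```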

  3. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an online estimation algorithm for the source term in a first-order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and on collector characteristics affected by environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, into which a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  4. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.

    2017-08-22

    In this paper, an online algorithm for estimating the source term in a first-order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors, where the source term represents the received energy. This energy depends on the solar irradiance intensity and on the collector characteristics, which are affected by environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term, which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on the modulating functions method, into which a moving-horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loop.

  5. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Directory of Open Access Journals (Sweden)

    Sanja Lujic

    Full Text Available Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Baseline survey data for 90,352 participants of the 45 and Up Study-a cohort study of residents of New South Wales, Australia, aged 45 years and over-were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data was examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ). The characteristics of people classified as multimorbid were compared using logistic regression modelling. Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46 683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative

  6. Multimorbidity in Australia: Comparing estimates derived using administrative data sources and survey data.

    Science.gov (United States)

    Lujic, Sanja; Simpson, Judy M; Zwar, Nicholas; Hosseinzadeh, Hassan; Jorm, Louisa

    2017-01-01

    Estimating multimorbidity (presence of two or more chronic conditions) using administrative data is becoming increasingly common. We investigated (1) the concordance of identification of chronic conditions and multimorbidity using self-report survey and administrative datasets; (2) characteristics of people with multimorbidity ascertained using different data sources; and (3) whether the same individuals are classified as multimorbid using different data sources. Baseline survey data for 90,352 participants of the 45 and Up Study-a cohort study of residents of New South Wales, Australia, aged 45 years and over-were linked to prior two-year pharmaceutical claims and hospital admission records. Concordance of eight self-report chronic conditions (reference) with claims and hospital data was examined using sensitivity (Sn), positive predictive value (PPV), and kappa (κ). The characteristics of people classified as multimorbid were compared using logistic regression modelling. Agreement was found to be highest for diabetes in both hospital and claims data (κ = 0.79, 0.78; Sn = 79%, 72%; PPV = 86%, 90%). The prevalence of multimorbidity was highest using self-report data (37.4%), followed by claims data (36.1%) and hospital data (19.3%). Combining all three datasets identified a total of 46 683 (52%) people with multimorbidity, with half of these identified using a single dataset only, and up to 20% identified on all three datasets. Characteristics of persons with and without multimorbidity were generally similar. However, the age gradient was more pronounced and people speaking a language other than English at home were more likely to be identified as multimorbid by administrative data. Different individuals, with different combinations of conditions, are identified as multimorbid when different data sources are used. As such, caution should be applied when ascertaining morbidity from a single data source as the agreement between self-report and administrative data
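The concordance statistics used in the study can be sketched on synthetic condition flags (the numbers below are illustrative, not the study's records):

```python
# Sketch (synthetic data): concordance of a binary condition flag
# between self-report (reference) and an administrative source, via
# sensitivity (Sn), positive predictive value (PPV) and Cohen's kappa.
self_report = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
admin_data  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]

tp = sum(s == 1 and a == 1 for s, a in zip(self_report, admin_data))
fp = sum(s == 0 and a == 1 for s, a in zip(self_report, admin_data))
fn = sum(s == 1 and a == 0 for s, a in zip(self_report, admin_data))
tn = sum(s == 0 and a == 0 for s, a in zip(self_report, admin_data))
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)            # Sn: fraction of true cases flagged
ppv = tp / (tp + fp)                    # fraction of flags that are true
p_obs = (tp + tn) / n                   # observed agreement
p_exp = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)   # chance-corrected agreement

print(sensitivity, ppv, round(kappa, 2))  # 0.8 0.8 0.6
```

Kappa discounts the agreement expected by chance, which is why it can be far below raw agreement when a condition is common.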

  7. Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics

    Science.gov (United States)

    Guo, Qiang

    solutions of continuous-time wavelet numerical methods for the nonlinear aerosol dynamics are proved by using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous- and discrete-time wavelet Galerkin schemes. We further derive a reliable and efficient a posteriori error estimate based on stable multiresolution wavelet bases, and an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies, based on the locality of the corresponding multiresolution processes, are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with the splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle-size direction and the upstream finite difference method along the spatial direction are used alternately in each time interval. Numerical experiments are presented to show the effectiveness of the developed methods.

  8. Quantum computation of multifractal exponents through the quantum wavelet transform

    International Nuclear Information System (INIS)

    Garcia-Mata, Ignacio; Giraud, Olivier; Georgeot, Bertrand

    2009-01-01

    We study the use of the quantum wavelet transform to efficiently extract information about the multifractal exponents of multifractal quantum states. We show that, combined with quantum simulation algorithms, it enables the construction of quantum algorithms for multifractal exponents with a polynomial gain compared to classical simulations. Numerical results indicate that a rough estimate of fractality could be obtained exponentially fast. Our findings are relevant, e.g., for quantum simulations of multifractal quantum maps and of the Anderson model at the metal-insulator transition.

  9. ORIENTATION FIELD RECONSTRUCTION OF ALTERED FINGERPRINT USING ORTHOGONAL WAVELETS

    Directory of Open Access Journals (Sweden)

    Mini M.G.

    2016-11-01

    Full Text Available Ridge orientation field is an important feature for fingerprint matching and fingerprint reconstruction. Matching of an altered fingerprint against its unaltered mates can be done by extracting the available features in the altered fingerprint and using them along with an approximated ridge orientation. This paper presents a method for approximating the ridge orientation field of altered fingerprints. In the proposed method, the sine and cosine of the doubled orientation of the fingerprint are decomposed using orthogonal wavelets and reconstructed back using only the approximation coefficients. No prior information about the singular points is needed for orientation approximation. The method is also found suitable for orientation estimation of low-quality fingerprint images.
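A minimal sketch of the doubled-angle smoothing step, using a one-level Haar approximation in place of a general orthogonal wavelet decomposition (numpy only; the grid size and noise level are illustrative assumptions):

```python
import numpy as np

# Working on sin(2*theta) and cos(2*theta) avoids the 0/180-degree
# ambiguity of ridge orientations; keeping only approximation
# coefficients low-passes both fields before the angle is re-assembled.
rng = np.random.default_rng(0)
theta = np.tile(np.linspace(0.0, np.pi / 2, 16), (16, 1))   # smooth field
theta_noisy = theta + 0.2 * rng.standard_normal(theta.shape)

def haar_approx(img):
    """One-level Haar approximation: 2x2 block means, upsampled back."""
    a = 0.25 * (img[0::2, 0::2] + img[0::2, 1::2]
                + img[1::2, 0::2] + img[1::2, 1::2])
    return np.kron(a, np.ones((2, 2)))

s = haar_approx(np.sin(2.0 * theta_noisy))
c = haar_approx(np.cos(2.0 * theta_noisy))
theta_rec = 0.5 * np.arctan2(s, c) % np.pi       # back to [0, pi)

err = np.abs(np.angle(np.exp(2j * (theta_rec - theta)))) / 2.0
print(err.mean() < 0.2)  # smoothed estimate stays close to the clean field
```

Averaging the sin/cos pair rather than the angles themselves is what makes the smoothing immune to the orientation wrap-around at 180 degrees.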

  10. A New Perceptual Mapping Model Using Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Taha TahaBasheer

    2017-01-01

    Full Text Available Perceptual mapping approaches have been widely used in visual information processing in multimedia and internet of things (IoT) applications. Accumulative Lifting Difference (ALD) is proposed in this paper as a texture mapping model based on the low-complexity lifting wavelet transform, and is combined with luminance masking to create an efficient perceptual mapping model for estimating Just Noticeable Distortion (JND) in digital images. In addition to its low-complexity operations, experimental results show that the proposed model can tolerate much more JND noise than previously proposed models.
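The lifting-based texture idea can be sketched with a single Haar lifting step; the exact ALD formulation is not given in the abstract, so the accumulation below is an illustrative assumption:

```python
import numpy as np

# One Haar lifting step produces detail coefficients d; accumulating
# |d| per region gives a cheap local-activity (texture) estimate of
# the kind a JND model weights against luminance masking.
def haar_lifting(row):
    even, odd = row[0::2].astype(float), row[1::2].astype(float)
    d = odd - even              # predict step: detail coefficients
    s = even + 0.5 * d          # update step: approximation (mean)
    return s, d

flat = np.full(16, 100.0)                            # smooth region
textured = np.where(np.arange(16) % 2, 90.0, 110.0)  # busy region

_, d_flat = haar_lifting(flat)
_, d_tex = haar_lifting(textured)
ald_flat, ald_tex = np.abs(d_flat).sum(), np.abs(d_tex).sum()
print(ald_flat, ald_tex)  # 0.0 160.0: textured blocks can hide more distortion
```

Lifting needs only additions and shifts, which is why it suits the low-complexity requirement stated in the abstract.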

  11. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to trace the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
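One of the measured-value error sources discussed, current-sensor bias in ampere-hour (coulomb-counting) SOC estimation, can be sketched with hypothetical numbers:

```python
# Sketch (hypothetical numbers): a small current-sensor offset
# accumulates without bound in coulomb counting, which is why pure
# ampere-hour integration needs periodic correction (e.g. from OCV
# or a model-based observer).
capacity_ah = 50.0
dt_h = 1.0 / 3600.0                 # 1 s sampling, in hours
true_current = 10.0                 # constant 10 A discharge
bias = 0.05                         # 50 mA sensor offset

soc_true, soc_est = 1.0, 1.0
for _ in range(3600 * 2):           # two hours of driving
    soc_true -= true_current * dt_h / capacity_ah
    soc_est -= (true_current + bias) * dt_h / capacity_ah

print(round(abs(soc_est - soc_true) * 100, 2))  # 0.2 (% SOC), grows with time
```

The drift here is linear in time (bias × time / capacity), which is exactly the kind of open-loop error the paper's error flow charts trace back to the measurement stage.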

  12. Identifying Patterns in the Weather of Europe for Source Term Estimation

    Science.gov (United States)

    Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis

    2017-04-01

    During emergencies that involve the release of hazardous substances into the atmosphere, the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances. Examples of radioactive release events include the Chernobyl accident in 1986, as well as the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances into the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the desired spatial resolution, traditional inverse modelling can be computationally time-consuming. This is especially true for cases involving complex topography and weather conditions, and it can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform, our approach moves the bulk of the computation to before any such event takes place, therefore allowing rapid initial, albeit rougher, estimations of the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field. Our case is differentiated by the fact that it focuses on plume dispersion patterns and those meteorological variables that affect dispersion the most.
For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal
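The pattern-identification step can be sketched with a numpy-only k-means on toy wind fields; the real system clusters actual meteorological fields over Europe, so the two synthetic regimes below are assumptions:

```python
import numpy as np

# Each row is a flattened wind field; centroids play the role of
# recurrent weather patterns for which dispersion simulations can be
# precomputed ahead of any emergency.
rng = np.random.default_rng(1)
westerly = np.array([1.0, 0.0] * 8)              # (u, v) pairs over 8 cells
northerly = np.array([0.0, -1.0] * 8)
fields = np.vstack(
    [westerly + 0.1 * rng.standard_normal(16) for _ in range(20)]
    + [northerly + 0.1 * rng.standard_normal(16) for _ in range(20)])

def kmeans(X, centroids, iters=10):
    for _ in range(iters):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)               # assign to nearest pattern
        centroids = np.array([X[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    return labels, centroids

labels, patterns = kmeans(fields, fields[[0, 20]].copy())
print((labels[:20] == 0).all() and (labels[20:] == 1).all())  # True
```

At run time, a newly observed field only needs a nearest-centroid lookup to retrieve the matching precomputed dispersion runs, which is what buys the speed over online inverse modelling.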

  13. Coherent multiscale image processing using dual-tree quaternion wavelets.

    Science.gov (United States)

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  14. Multi-model Estimates of Intercontinental Source-Receptor Relationships for Ozone Pollution

    Energy Technology Data Exchange (ETDEWEB)

    Fiore, A M; Dentener, F J; Wild, O; Cuvelier, C; Schultz, M G; Hess, P; Textor, C; Schulz, M; Doherty, R; Horowitz, L W; MacKenzie, I A; Sanderson, M G; Shindell, D T; Stevenson, D S; Szopa, S; Van Dingenen, R; Zeng, G; Atherton, C; Bergmann, D; Bey, I; Carmichael, G; Collins, W J; Duncan, B N; Faluvegi, G; Folberth, G; Gauss, M; Gong, S; Hauglustaine, D; Holloway, T; Isaksen, I A; Jacob, D J; Jonson, J E; Kaminski, J W; Keating, T J; Lupu, A; Marmer, E; Montanaro, V; Park, R; Pitari, G; Pringle, K J; Pyle, J A; Schroeder, S; Vivanco, M G; Wind, P; Wojcik, G; Wu, S; Zuber, A

    2008-10-16

    Understanding the surface O₃ response over a 'receptor' region to emission changes over a foreign 'source' region is key to evaluating the potential gains from an international approach to abate ozone (O₃) pollution. We apply an ensemble of 21 global and hemispheric chemical transport models to estimate the spatial average surface O₃ response over East Asia (EA), Europe (EU), North America (NA) and South Asia (SA) to 20% decreases in anthropogenic emissions of the O₃ precursors NOₓ, NMVOC, and CO (individually and combined) from each of these regions. We find that the ensemble mean surface O₃ concentration in the base case (year 2001) simulation matches available observations throughout the year over EU but overestimates them by >10 ppb during summer and early fall over the eastern U.S. and Japan. The sum of the O₃ responses to NOₓ, CO, and NMVOC decreases separately is approximately equal to that from a simultaneous reduction of all precursors. We define a continental-scale 'import sensitivity' as the ratio of the O₃ response to the 20% reductions in foreign versus 'domestic' (i.e., over the source region itself) emissions. For example, the combined reduction of emissions from the 3 foreign regions produces an ensemble spatial mean decrease of 0.6 ppb over EU (0.4 ppb from NA), less than the 0.8 ppb from the reduction of EU emissions, leading to an import sensitivity ratio of 0.7. The ensemble mean surface O₃ response to foreign emissions is largest in spring and late fall (0.7-0.9 ppb decrease in all regions from the combined precursor reductions in the 3 foreign regions), with import sensitivities ranging from 0.5 to 1.1 (responses to domestic emission reductions are 0.8-1.6 ppb). High O₃ values are much more sensitive to domestic emissions than to foreign emissions, as indicated by lower import sensitivities of 0.2 to 0.3 during July in EA, EU, and NA
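The import-sensitivity ratio defined above, worked through with the ensemble-mean numbers quoted for Europe (EU):

```python
# Worked example of the 'import sensitivity' ratio, using the EU
# numbers quoted in the abstract.
foreign_response = 0.6   # ppb O3 decrease from 20% cuts in the 3 foreign regions
domestic_response = 0.8  # ppb O3 decrease from the 20% cut in EU's own emissions

import_sensitivity = foreign_response / domestic_response
print(round(import_sensitivity, 2))  # 0.75, quoted as ~0.7 in the abstract
```

A ratio near 1 means imported pollution matters almost as much as domestic emissions for the regional mean, while the 0.2-0.3 values for July peaks show high-ozone episodes remain a domestic problem.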

  15. Application of Improved Wavelet Thresholding Function in Image Denoising Processing

    Directory of Open Access Journals (Sweden)

    Hong Qi Zhang

    2014-07-01

    Full Text Available Wavelet analysis is a time-frequency analysis method that handles time-frequency localization well. This paper analyzes the basic principles of the wavelet transform and the relationship between the Lipschitz exponent of a signal singularity and the local maxima of the wavelet transform coefficient moduli. The principles of the wavelet transform in image denoising are analyzed, and the disadvantages of the traditional wavelet thresholding functions are studied: the discontinuity of the hard threshold and the constant deviation of the soft threshold are improved, and images are denoised using the improved threshold function.
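The thresholding functions under discussion can be sketched as follows; the "improved" function below is one common compromise from the literature, not necessarily the paper's exact formula:

```python
import numpy as np

# Hard thresholding is discontinuous at +/-T; soft thresholding shrinks
# every surviving coefficient by T (constant deviation). A common
# improved function decays that deviation for large coefficients.
def hard(w, T):
    return np.where(np.abs(w) > T, w, 0.0)

def soft(w, T):
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

def improved(w, T, alpha=2.0):
    # Continuous at |w| = T like soft, but the shrinkage decays
    # exponentially, so large coefficients are left nearly unbiased.
    shrink = T * np.exp(-alpha * (np.abs(w) - T))
    return np.where(np.abs(w) > T, np.sign(w) * (np.abs(w) - shrink), 0.0)

w = np.array([-8.0, -1.5, 0.5, 3.0, 10.0])
T = 2.0
print(hard(w, T))      # [-8.  0.  0.  3. 10.]
print(soft(w, T))      # [-6.  0.  0.  1.  8.]
print(improved(w, T))  # deviation from w vanishes as |w| grows
```

In practice these functions are applied to the detail coefficients of a wavelet decomposition, with T set from the noise level (e.g. the universal threshold), before inverse transforming.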

  16. Wavelet-fractal approach to surface characterization of nanocrystalline ITO thin films

    International Nuclear Information System (INIS)

    Raoufi, Davood; Kalali, Zahra

    2012-01-01

    In this study, indium tin oxide (ITO) thin films were prepared by the electron beam deposition method on glass substrates at room temperature (RT). The surface morphology of the ITO thin films, before and after annealing at 500 °C, was investigated by analyzing the surface profiles of atomic force microscopy (AFM) images using the wavelet transform formalism. The wavelet coefficients related to the thin film surface profiles have been calculated, and the roughness exponent (α) of the films has then been estimated using the scalegram method. The results reveal that the surface profiles of the films before and after the annealing process have a self-affine nature.
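The scalegram estimate of a roughness exponent can be sketched on a synthetic self-affine profile; the spectral synthesis, the Haar wavelet, and all parameters below are assumptions for illustration, not the paper's AFM data:

```python
import numpy as np

# For a self-affine profile with roughness exponent alpha, the variance
# of orthonormal Haar detail coefficients grows as 2**(j*(2*alpha + 1))
# with level j, so a log2-variance vs. level fit (the scalegram)
# recovers alpha. The profile is synthesized from a power-law spectrum.
rng = np.random.default_rng(3)
N, alpha_true = 4096, 0.7
f = np.fft.rfftfreq(N, d=1.0)
f[0] = f[1]                                   # avoid division by zero
amp = f ** (-(2 * alpha_true + 1) / 2)        # S(f) ~ f^-(2*alpha+1)
phase = np.exp(2j * np.pi * rng.random(len(f)))
profile = np.fft.irfft(amp * phase, n=N)

s, variances = profile.copy(), []
for _ in range(6):                            # levels j = 1..6
    even, odd = s[0::2], s[1::2]
    d = (odd - even) / np.sqrt(2.0)           # orthonormal Haar details
    variances.append(d.var())
    s = (even + odd) / np.sqrt(2.0)

j = np.arange(1, 7)
slope, _ = np.polyfit(j, np.log2(variances), 1)
alpha_est = (slope - 1.0) / 2.0
print(round(alpha_est, 2))  # roughly recovers alpha_true = 0.7
```

The same slope fit applied to AFM line profiles is what the scalegram method in the abstract refers to.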

  17. The Application of Helicopter Rotor Defect Detection Using Wavelet Analysis and Neural Network Technique

    Directory of Open Access Journals (Sweden)

    Jin-Li Sun

    2014-06-01

    Full Text Available When inspecting a helicopter rotor beam with ultrasonic testing, it is difficult to achieve both noise removal and quantitative evaluation. This paper used wavelet analysis to remove the noise from the ultrasonic detection signal and highlight the signal features of defects, and then plotted the curve of defect size versus signal amplitude. Based on the relationship between defect size and signal amplitude, a BP neural network was built, and the corresponding estimated value of the simulated defect was obtained by repeated training. It was confirmed that the wavelet analysis and neural network technique met the requirements of practical testing.
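The quantitative step can be sketched with a minimal BP (backpropagation) network on synthetic amplitude-size data; the architecture, training schedule, and data are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Tiny one-hidden-layer network learning a defect-size vs. echo-amplitude
# relationship by plain full-batch gradient descent (backpropagation).
rng = np.random.default_rng(4)
amp = np.linspace(0.1, 1.0, 40)[:, None]          # normalized echo amplitude
size = 2.0 * np.sqrt(amp) + 0.02 * rng.standard_normal((40, 1))  # defect size, mm

W1, b1 = rng.standard_normal((1, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(amp @ W1 + b1)                    # forward pass
    pred = h @ W2 + b2
    err = pred - size
    gW2 = h.T @ err / len(amp); gb2 = err.mean(0)  # backward pass
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = amp.T @ dh / len(amp); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(amp @ W1 + b1) @ W2 + b2 - size) ** 2).mean())
print(mse < 0.05)  # the net reproduces the amplitude-size curve closely
```

Once trained, evaluating the network on a denoised echo amplitude gives the defect-size estimate, mirroring the paper's wavelet-then-network pipeline.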

  18. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    Energy Technology Data Exchange (ETDEWEB)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R. [GEPI, Observatoire de Paris, CNRS, Université Paris Diderot, 61 Avenue de l' Observatoire, F-75014 Paris (France); Erben, T.; Hildebrandt, H. [Argelander-Institut für Astronomie, University of Bonn, Auf dem Hügel 71, D-53121 Bonn (Germany); Ilbert, O.; Boissier, S.; Boselli, A. [Aix Marseille Université, CNRS, Laboratoire d' Astrophysique de Marseille, UMR 7326, F-13388 Marseille (France); Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J. [Herzberg Institute of Astrophysics, National Research Council of Canada, Victoria, BC V9E 2E7 (Canada); Chen, Y.-T. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 106, Taiwan (China); Cuillandre, J.-C. [Canada-France-Hawaii Telescope Corporation, Kamuela, HI 96743 (United States); Duc, P. A. [Laboratoire AIM Paris-Saclay, CEA/IRFU/SAp, CNRS/INSU, Université Paris Diderot, F-91191 Gif-sur-Yvette Cedex (France); Durrell, P. R. [Department of Physics and Astronomy, Youngstown State University, Youngstown, OH 44555 (United States); Guhathakurta, P. [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Lançon, A., E-mail: anand.raichoor@obspm.fr [Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l' Université, F-67000 Strasbourg (France); and others

    2014-12-20

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and an individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (-0.05 < Δz < -0.02, σ_outl.rej ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.

  19. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    International Nuclear Information System (INIS)

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Durrell, P. R.; Guhathakurta, P.; Lançon, A.

    2014-01-01

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and an individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (-0.05 < Δz < -0.02, σ_outl.rej ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.
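The quoted bias, scatter, and outlier statistics follow standard photo-z conventions; a sketch on synthetic zphot/zspec pairs (not NGVS data, and with assumed noise and outlier rates):

```python
import numpy as np

# Standard photo-z quality metrics: normalized residual dz, outlier
# fraction with the common |dz| > 0.15 cut, then bias and NMAD scatter
# computed after outlier rejection.
rng = np.random.default_rng(5)
z_spec = rng.uniform(0.05, 1.0, 1000)
z_phot = z_spec + 0.03 * (1 + z_spec) * rng.standard_normal(1000)
z_phot[:30] += 0.5                       # inject 3% catastrophic outliers

dz = (z_phot - z_spec) / (1 + z_spec)
outlier = np.abs(dz) > 0.15
bias = np.median(dz[~outlier])
sigma = 1.4826 * np.median(np.abs(dz[~outlier] - bias))   # NMAD scatter

print(round(float(outlier.mean()), 2), round(float(sigma), 3))
```

The (1 + z) normalization and the median-based NMAD make the scatter estimate robust to the catastrophic-outlier tail that photometric redshifts always carry.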

  20. Using NDACC column measurements of carbonyl sulfide to estimate its sources and sinks

    Science.gov (United States)

    Wang, Yuting; Marshall, Julia; Palm, Mathias; Deutscher, Nicholas; Roedenbeck, Christian; Warneke, Thorsten; Notholt, Justus; Baker, Ian; Berry, Joe; Suntharalingam, Parvadha; Jones, Nicholas; Mahieu, Emmanuel; Lejeune, Bernard; Hannigan, James; Conway, Stephanie; Strong, Kimberly; Campbell, Elliott; Wolf, Adam; Kremser, Stefanie

    2016-04-01

    Carbonyl sulfide (OCS) is taken up by plants during photosynthesis through a similar pathway as carbon dioxide (CO2), but is not emitted by respiration, and thus holds great promise as an additional constraint on the carbon cycle. It might act as a tracer of photosynthesis: a way to separate gross primary productivity (GPP) from the net ecosystem exchange (NEE) that is typically derived from flux modeling. However, the estimates of OCS sources and sinks still have significant uncertainties, which make it difficult to use OCS as a photosynthetic tracer, and the existing long-term surface-based measurements are sparse. The NDACC-IRWG measures the absorption of OCS in the atmosphere and provides a potential long-term database of OCS total/partial columns, which can be used to evaluate OCS fluxes. We have retrieved OCS columns from several NDACC sites around the globe and compared them to model simulations with OCS land fluxes based on the Simple Biosphere model (SiB). The disagreement between the measurements and the forward simulations indicates that (1) the OCS land fluxes from SiB are too low in the northern boreal region; and (2) the ocean fluxes need to be optimized. A statistical linear flux model describing OCS is developed in the TM3 inversion system and is used to estimate the OCS fluxes. We performed flux inversions using only NOAA OCS surface measurements as an observational constraint, and then using both surface and NDACC OCS column measurements, and assessed the differences. The posterior uncertainties of the inverted OCS fluxes decreased with the inclusion of NDACC data compared to those using surface data only, and could be further reduced if more NDACC sites were included.
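The flux inversion can be sketched as a linear Bayesian update; the toy observation operators below are assumptions, but they illustrate why adding column observations shrinks the posterior flux uncertainty:

```python
import numpy as np

# Linear Bayesian flux inversion: y = H f + noise, Gaussian prior on f.
# Adding a second, differently weighted observation row mimics adding
# NDACC column data to the NOAA surface constraint.
def invert(H, y, sigma_obs, sigma_prior):
    R_inv = np.eye(len(y)) / sigma_obs**2
    B_inv = np.eye(H.shape[1]) / sigma_prior**2
    post_cov = np.linalg.inv(H.T @ R_inv @ H + B_inv)   # posterior covariance
    f_hat = post_cov @ H.T @ R_inv @ y                  # prior mean taken as zero
    return f_hat, post_cov

f_true = np.array([2.0, -1.0])               # two regional fluxes
H_surface = np.array([[1.0, 0.2]])           # surface site, sees mostly region 1
y1 = H_surface @ f_true
f1, cov1 = invert(H_surface, y1, sigma_obs=0.1, sigma_prior=1.0)

H_both = np.vstack([H_surface, [0.3, 1.0]])  # add a column-like measurement
y2 = H_both @ f_true
f2, cov2 = invert(H_both, y2, sigma_obs=0.1, sigma_prior=1.0)

print(np.trace(cov2) < np.trace(cov1))  # True: posterior uncertainty shrinks
```

With one observation, the second flux is constrained almost entirely by the prior; the extra row makes the problem well-posed and pulls both fluxes close to their true values.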

  1. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Madankan, R. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Pouget, S. [Department of Geology, University at Buffalo (United States); Singla, P., E-mail: psingla@buffalo.edu [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Bursik, M. [Department of Geology, University at Buffalo (United States); Dehn, J. [Geophysical Institute, University of Alaska, Fairbanks (United States); Jones, M. [Center for Computational Research, University at Buffalo (United States); Patra, A. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Pavolonis, M. [NOAA-NESDIS, Center for Satellite Applications and Research (United States); Pitman, E.B. [Department of Mathematics, University at Buffalo (United States); Singh, T. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Webley, P. [Geophysical Institute, University of Alaska, Fairbanks (United States)

    2014-08-15

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult. As a result, ash advisories based on a deterministic approach tend to be conservative and often over- or underestimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
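The polynomial-chaos surrogate idea can be sketched in one dimension: an "expensive" model of a standard-normal uncertain input is run on a small ensemble, a Hermite chaos expansion is fit, and the cheap surrogate is then sampled heavily. The model g below is a stand-in, not the puff dispersion code:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Fit a Hermite polynomial chaos surrogate to a few model runs, then
# sample it cheaply for probabilistic output statistics. The Hermite
# (probabilists') basis is the natural choice for a Gaussian input.
rng = np.random.default_rng(6)
g = lambda xi: np.exp(0.3 * xi) + 0.5 * xi    # stand-in "expensive" model

xi_nodes = rng.standard_normal(50)            # small ensemble of model runs
coeffs = He.hermefit(xi_nodes, g(xi_nodes), deg=4)

xi_mc = rng.standard_normal(100000)           # cheap surrogate sampling
samples = He.hermeval(xi_mc, coeffs)

# Surrogate mean vs. the analytic mean E[g] = exp(0.3**2 / 2).
print(round(float(samples.mean()), 2), round(float(np.exp(0.045)), 2))
```

The expensive runs happen only at the 50 nodes; the hundred-thousand surrogate evaluations cost essentially nothing, which is what makes probabilistic hazard maps tractable.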

  2. Source term estimation for small sized HTRs: status and further needs - a german approach

    International Nuclear Information System (INIS)

    Moormann, R.; Schenk, W.; Verfondern, K.

    2000-01-01

    The main results of German studies on source term estimation for small pebble-bed HTRs with their strict safety demands are outlined. Core heat-up events are no longer dominant for modern high-quality fuel, but fission product transport during water ingress accidents (steam cycle plants) and depressurization is relevant, mainly due to remobilization of fission products which were plated out in the course of normal operation or became dust-borne. An important lack of knowledge was identified concerning data on plate-out under normal operation, as well as on the behaviour of dust-borne activity as a whole. Improved knowledge in this field is also important for maintenance/repair and design/shielding. For core heat-up events, the influence of burn-up on temperature-induced fission product release has to be measured for future high-burn-up fuel. Also, transport mechanisms out of the He circuit into the environment require further examination. For water/steam ingress events, mobilization of plated-out fission products by steam or water has to be considered in detail, along with steam interaction with kernels of particles with defective coatings. For source terms of depressurization, a more detailed knowledge of the flow pattern and shear forces on the various surfaces is necessary. In order to improve the knowledge on plate-out and dust in normal operation and to generate specimens for experimental remobilization studies, planning/design of plate-out/dust examination facilities which could be added to the next generation of HTRs (HTR10, HTTR) is proposed. For severe air ingress and reactivity accidents, the behaviour of future advanced fuel elements has to be experimentally tested. (authors)

  3. Frequentist and Bayesian inference for Gaussian-log-Gaussian wavelet trees and statistical signal processing applications

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Møller, Jesper

    2017-01-01

    We introduce new estimation methods for a subclass of the Gaussian scale mixture models for wavelet trees by Wainwright, Simoncelli and Willsky that rely on modern results for composite likelihoods and approximate Bayesian inference. Our methodology is illustrated for denoising and edge detection...

  4. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    Science.gov (United States)

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
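
The paper's projection operator kernel has its own closed form; as general background, the classic translation-invariant wavelet kernel built from the Mexican-hat mother wavelet can be sketched as follows (an illustrative construction, not the authors' kernel):

```python
import numpy as np

def mexican_hat(u):
    # Mother wavelet h(u) = (1 - u^2) * exp(-u^2 / 2)
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def wavelet_kernel(x, y, a=1.0):
    # Translation-invariant wavelet kernel: K(x, y) = prod_i h((x_i - y_i) / a),
    # where a is a dilation parameter shared across dimensions.
    u = (np.asarray(x, float) - np.asarray(y, float)) / a
    return float(np.prod(mexican_hat(u)))

# Identical inputs give h(0)^d = 1; the response decays and oscillates with distance
k_same = wavelet_kernel(np.array([0.2, 0.5]), np.array([0.2, 0.5]))
k_far = wavelet_kernel(np.array([0.0, 0.0]), np.array([3.0, 0.0]))
```

Multiscale variants mix several dilation parameters a, which is the spirit of the multiscale learning the abstract describes.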

  5. The assessment of multi-sensor image fusion using wavelet transforms for mapping the Brazilian Savanna

    NARCIS (Netherlands)

    Weimar Acerbi, F.; Clevers, J.G.P.W.; Schaepman, M.E.

    2006-01-01

    Multi-sensor image fusion using the wavelet approach provides a conceptual framework for the improvement of the spatial resolution with minimal distortion of the spectral content of the source image. This paper assesses whether images with a large ratio of spatial resolution can be fused, and

  6. Estimation of Source Term Behaviors in SBO Sequence in a Typical 1000MWth PWR and Comparison with Other Source Term Results

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Han, Seok Jung; Ahn, Kwang Il; Fynan, Douglas; Jung, Yong Hoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Since the Three Mile Island (TMI, 1979), Chernobyl (1986) and Fukushima Daiichi (2011) accidents, the assessment of radiological source term effects on the environment has been a key concern of nuclear safety. In the Fukushima Daiichi accident, a long-term SBO (station blackout) occurred. Using worst-case assumptions, similar to those of the Fukushima accident, on the accident sequences and on the availability of safety systems, the thermal-hydraulic behavior, core relocation and environmental source term behavior are estimated for a long-term SBO accident in an OPR-1000 reactor. MELCOR code version 1.8.6 is used in this analysis. The source term results estimated in this study are compared with previous studies and with the estimates for the Fukushima accident in the UNSCEAR-2013 report. This study estimated that 11% of the iodine and 2% of the cesium inventory can be released to the environment, while the UNSCEAR-2013 report estimated releases of 2-8% of iodine and 1-3% of cesium to the environment. The two sets of results are thus similar in terms of the release fractions of iodine and cesium.

  7. Application of the wavelet image analysis technique to monitor cell concentration in bioprocesses

    Directory of Open Access Journals (Sweden)

    G. J. R. Garófano

    2005-12-01

    Full Text Available The growth of cells of great practical interest, such as the filamentous cells of the bacterium Streptomyces clavuligerus, the yeast Saccharomyces cerevisiae and the insect Spodoptera frugiperda (Sf9) cell, cultivated in shaking flasks with complex media at appropriate temperatures and pHs, was quantified by the new wavelet transform technique. This image analysis tool was implemented using Matlab 5.2 software to process digital images acquired from samples taken of these three types of cells throughout their cultivation. The values of the average wavelet coefficients (AWCs) of simplified images were compared with experimental measurements of cell concentration and with computer-based densitometric measurements. The AWCs were shown to be directly proportional to the measured cell concentrations and to the densitometric measurements, demonstrating the great potential of the wavelet transform technique for quantitatively estimating the growth of several types of cells.

  8. Wavelet decomposition and neuro-fuzzy hybrid system applied to short-term wind power

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Jimenez, L.A.; Mendoza-Villena, M. [La Rioja Univ., Logrono (Spain). Dept. of Electrical Engineering; Ramirez-Rosado, I.J.; Abebe, B. [Zaragoza Univ., Zaragoza (Spain). Dept. of Electrical Engineering

    2010-03-09

    Wind energy has become increasingly popular as a renewable energy source. However, the integration of wind farms in electrical power systems presents several problems, including the chaotic fluctuation of wind flow, which results in highly variable power generation from a wind farm. An accurate forecast of wind power generation has important consequences for the economic operation of the integrated power system. This paper presented a new statistical short-term wind power forecasting model based on wavelet decomposition and neuro-fuzzy systems optimized with a genetic algorithm. The paper discussed wavelet decomposition; the proposed wind power forecasting model; and computer results. The original time series, the mean electric power generated in a wind farm, was decomposed into wavelet coefficients that were used as inputs for the forecasting model. The forecasting results obtained with the final models were compared to those obtained with traditional forecasting models, showing better performance for all the forecasting horizons. 13 refs., 1 tab., 4 figs.
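
The decomposition step described above can be sketched with a one-level Haar transform, whose approximation (trend) and detail (fluctuation) coefficients would serve as forecaster inputs (numpy-only sketch; the wind-power values are invented for illustration):

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar decomposition of an even-length series into
    # low-pass (trend) and high-pass (fluctuation) coefficients.
    x = np.asarray(x, float)
    assert len(x) % 2 == 0, "series length must be even"
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# Hypothetical mean wind-farm power series (MW); coefficients feed the model
power = np.array([3.0, 3.2, 2.8, 5.1, 5.0, 4.9, 1.2, 1.4])
approx, detail = haar_dwt(power)
```

The transform is invertible, so forecasts made on the coefficients can be mapped back to the power series.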

  9. Experimental sources of variation in avian energetics: estimated basal metabolic rate decreases with successive measurements.

    Science.gov (United States)

    Jacobs, Paul J; McKechnie, Andrew E

    2014-01-01

    Basal metabolic rate (BMR) is one of the most widely used metabolic variables in endotherm ecological and evolutionary physiology. Surprisingly few studies have investigated how BMR is influenced by experimental and analytical variables over and above the standardized conditions required for minimum normothermic resting metabolism. We tested whether avian BMR is affected by habituation to the conditions experienced during laboratory gas exchange measurements by measuring BMR five times in succession in budgerigars (Melopsittacus undulatus) housed under constant temperature and photoperiod. Both the magnitude and the variability of BMR decreased significantly with repeated measurements, from 0.410 ± 0.092 W (n = 9) during the first measurement to 0.285 ± 0.042 W (n = 9) during the fifth measurement. Thus, estimated BMR decreased by ∼30% within individuals solely on account of the number of times they had previously experienced the experimental conditions. The most likely explanation for these results is an attenuation with repeated exposure of the acute stress response induced by birds being handled and placed in respirometry chambers. Our data suggest that habituation to experimental conditions is potentially an important determinant of observed BMR, and this source of variation needs to be taken into account in future studies of metabolic variation among individuals, populations, and species.

  10. Estimate on external effective doses received by the Iranian population from environmental gamma radiation sources

    Energy Technology Data Exchange (ETDEWEB)

    Roozitalab, J.; Reza deevband, M.; Rastkhah, N. [National Radiation Protection Dept. Atomic Energy Organization (Iran, Islamic Republic of); Sohrabi, M. [Intenatinal atomic Energy Agency, Vienna (Austria)

    2006-07-01

    The concentration of natural radioactive materials, especially U-238, Ra-226, Th-232 and K-40 in construction materials and soil, together with the absorbed dose from cosmic rays, is the most important source of effective dose to the public from environmental radiation. In order to evaluate the external effective dose, more than 1000 measurements were carried out in 36 cities with dosimeters sensitive to environmental gamma radiation, under indoor and outdoor conditions in residential areas. The results show that the gamma exposure rate inside buildings in Iran ranges over 8.7-20.5 μR/h, and in the outdoor environments of different cities over 7.9-20.6 μR/h, with mean values of 14.33 and 12.62 μR/h respectively. The indoor-to-outdoor ratio of the absorbed dose rate in the measured environments is estimated to be 1.55, excluding the contribution of cosmic rays. This study shows that the average effective dose rate to each Iranian person from environmental gamma radiation is 96.9 nSv/h, corresponding to an annual effective dose of 0.848 mSv per person. (authors)
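
The reported annual dose follows directly from the hourly rate; a quick arithmetic check (assuming an 8760-hour year):

```python
# Convert the mean environmental gamma dose rate to an annual effective dose
rate_nSv_per_h = 96.9          # reported mean effective dose rate
hours_per_year = 24 * 365      # 8760 h
annual_mSv = rate_nSv_per_h * hours_per_year * 1e-6   # nSv -> mSv
# annual_mSv is about 0.849, consistent with the reported 0.848 mSv
```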

  11. Estimate on external effective doses received by the Iranian population from environmental gamma radiation sources

    International Nuclear Information System (INIS)

    Roozitalab, J.; Reza deevband, M.; Rastkhah, N.; Sohrabi, M.

    2006-01-01

    The concentration of natural radioactive materials, especially U-238, Ra-226, Th-232 and K-40 in construction materials and soil, together with the absorbed dose from cosmic rays, is the most important source of effective dose to the public from environmental radiation. In order to evaluate the external effective dose, more than 1000 measurements were carried out in 36 cities with dosimeters sensitive to environmental gamma radiation, under indoor and outdoor conditions in residential areas. The results show that the gamma exposure rate inside buildings in Iran ranges over 8.7-20.5 μR/h, and in the outdoor environments of different cities over 7.9-20.6 μR/h, with mean values of 14.33 and 12.62 μR/h respectively. The indoor-to-outdoor ratio of the absorbed dose rate in the measured environments is estimated to be 1.55, excluding the contribution of cosmic rays. This study shows that the average effective dose rate to each Iranian person from environmental gamma radiation is 96.9 nSv/h, corresponding to an annual effective dose of 0.848 mSv per person. (authors)

  12. Detecting microcalcifications in digital mammogram using wavelets

    International Nuclear Information System (INIS)

    Yang Jucheng; Park Dongsun

    2004-01-01

    Breast cancer is still one of the main causes of mortality in women, but early detection can increase the chance of cure. Microcalcifications are small structures which can indicate the presence of cancer, since they are often associated with the most diverse types of breast tumors. However, their very small size and the limitations of X-ray systems constrain adequate visualization of such structures, which means that microcalcifications can often be missed in visual examination of a mammogram. In addition, the human eye cannot distinguish minimal differences in tonality, a further constraint when the mammogram presents poor contrast between the microcalcifications and the surrounding tissue. Computer-aided diagnosis (CAD) schemes are being developed in order to increase the probability of early detection. To enhance and detect the microcalcifications in mammograms, we use the wavelet transform. From a signal processing point of view, microcalcifications are high-frequency components of the mammogram. Owing to the multiresolution decomposition capacity of the wavelet transform, we can decompose the image into resolution levels that are sensitive to different frequency bands. By choosing an appropriate wavelet and the right resolution level, we can effectively enhance and detect the microcalcifications in a digital mammogram. In this work, we describe a new four-step method for the detection of microcalcifications: segmentation, wavelet transform processing, labeling and post-processing. The segmentation step splits the breast area into 256x256 segments, and the wavelet transform is applied to each sub-image. For comparison, four typical wavelet families and four decomposition levels are discussed: the Daubechies, Biorthogonal, Coiflets and Symlets families, from which db4, bior3.7, coif3 and sym2 are chosen.
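
The subband decomposition underlying the method can be illustrated with a one-level 2-D Haar transform, in which a small bright spot (a toy stand-in for a microcalcification) shows up in the high-frequency detail subbands:

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar transform returning (LL, LH, HL, HH) subbands;
    # small high-frequency structures concentrate in LH, HL and HH.
    img = np.asarray(img, float)
    rs = (img[0::2, :] + img[1::2, :]) / 2.0   # row sums (low-pass)
    rd = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences (high-pass)
    LL = (rs[:, 0::2] + rs[:, 1::2]) / 2.0
    LH = (rs[:, 0::2] - rs[:, 1::2]) / 2.0
    HL = (rd[:, 0::2] + rd[:, 1::2]) / 2.0
    HH = (rd[:, 0::2] - rd[:, 1::2]) / 2.0
    return LL, LH, HL, HH

# Flat background with one bright pixel standing in for a microcalcification
tile = np.zeros((8, 8))
tile[3, 4] = 100.0
LL, LH, HL, HH = haar2d(tile)
```

Thresholding the detail subbands and reconstructing is one simple way to turn this decomposition into an enhancement step.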

  13. Multiresolution wavelet-ANN model for significant wave height forecasting.

    Digital Repository Service at National Institute of Oceanography (India)

    Deka, P.C.; Mandal, S.; Prahlada, R.

    Hybrid wavelet artificial neural network (WLNN) has been applied in the present study to forecast significant wave heights (Hs). Here Discrete Wavelet Transformation is used to preprocess the time series data (Hs) prior to Artificial Neural Network...

  14. A New Formula for the Inverse Wavelet Transform

    OpenAIRE

    Sun, Wenchang

    2010-01-01

    Finding a computationally efficient algorithm for the inverse continuous wavelet transform is a fundamental topic in applications. In this paper, we show the convergence of the inverse wavelet transform.

  15. Wavelet transforms as solutions of partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Zweig, G.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  16. Top-down estimate of a large source of atmospheric carbon monoxide associated with fuel combustion in Asia

    Energy Technology Data Exchange (ETDEWEB)

    Kasibhatla, P.; Arellano, A.; Logan, J.A.; Palmer, P.I.; Novelli, P. [Duke University, Durham, NC (United States). Nicholas School of Environmental & Earth Science

    2002-10-01

    Deriving robust regional estimates of the sources of chemically and radiatively important gases and aerosols to the atmosphere is challenging. Using an inverse modeling methodology, it was found that the source of carbon monoxide from fossil-fuel and biofuel combustion in Asia during 1994 was 350-380 Tg yr{sup -1}, which is 110-140 Tg yr{sup -1} higher than bottom-up estimates derived using traditional inventory-based approaches. This discrepancy points to an important gap in our understanding of the human impact on atmospheric chemical composition.

  17. Simultaneous estimation of strength and position of a heat source in a participating medium using DE algorithm

    International Nuclear Information System (INIS)

    Parwani, Ajit K.; Talukdar, Prabal; Subbarao, P.M.V.

    2013-01-01

    An inverse heat transfer problem is discussed to estimate simultaneously the unknown position and timewise varying strength of a heat source by utilizing differential evolution approach. A two dimensional enclosure with isothermal and black boundaries containing non-scattering, absorbing and emitting gray medium is considered. Both radiation and conduction heat transfer are included. No prior information is used for the functional form of timewise varying strength of heat source. The finite volume method is used to solve the radiative transfer equation and the energy equation. In this work, instead of measured data, some temperature data required in the solution of the inverse problem are taken from the solution of the direct problem. The effect of measurement errors on the accuracy of estimation is examined by introducing errors in the temperature data of the direct problem. The prediction of source strength and its position by the differential evolution (DE) algorithm is found to be quite reasonable. -- Highlights: •Simultaneous estimation of strength and position of a heat source. •A conducting and radiatively participating medium is considered. •Implementation of differential evolution algorithm for such kind of problems. •Profiles with discontinuities can be estimated accurately. •No limitation in the determination of source strength at the final time
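
A differential evolution loop of the kind used here can be sketched in a few lines; this toy version recovers a hypothetical two-parameter source position by minimizing a quadratic misfit (illustrative only, not the authors' implementation):

```python
import numpy as np

def diff_evolution(f, bounds, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    # Minimal DE/rand/1/bin: mutation with scaled difference vectors,
    # binomial crossover, greedy selection.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True   # at least one gene crosses over
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:
                X[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return X[best], fit[best]

# Toy inverse problem: recover an unknown "source position" from a misfit function
true_pos = np.array([1.5, -0.5])
pos, misfit = diff_evolution(lambda p: float(np.sum((p - true_pos) ** 2)),
                             bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper, the objective would instead be the mismatch between measured temperatures and those simulated by the radiation-conduction forward model.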

  18. HYDRAULIC ELEVATOR INSTALLATION ESTIMATION FOR THE WATER SOURCE WELL SAND-PACK CLEANING UP

    Directory of Open Access Journals (Sweden)

    V. V. Ivashechkin

    2016-01-01

    Full Text Available The article offers a design for a hydraulic elevator installation for cleaning sand packs out of water-source wells. It considers the installation's hydraulic circuit, in which a standard pump feeds water from a high-level tank into the borehole through two parallel water lines: a water-jet line, whose washing nozzle destroys the sand pack, and a supply pipeline coupled with the operating nozzle of the hydraulic elevator, which contains the inlet and delivery pipelines for taking in the hydromixture and removing it from the well, respectively. The paper presents equations for fluid motion in the supply and water-jet pipelines and offers expressions for evaluating the required heads in them. To determine the water flow in the supply and water-jet pipelines, the author proposes a graphical approach in which the operating point is found on a Q-H chart by plotting the characteristics of the pump and the pipelines. For calculating the useful vertical head, the delivery and the dimensions of the hydraulic elevator, the article employs the momentum equation, assuming conservation of momentum before and after mixing of the flows in the hydraulic elevator. The suggested correlations for evaluating the hydraulic elevator efficiency determine the duration of sand-pack removal as a function of its size and the ejected fluid flow rate. A worked example estimates the installation parameters for removing a sand pack from a water-source borehole 41 m deep and 150 mm in diameter, drilled in the village of Uzla in the Myadel district of Minsk region. The working efficiency of a manufactured and laboratory-tested engineering prototype of the hydraulic elevator installation was confirmed in tests at the indicated borehole site. Together with the graphical approach, the suggested calculation procedure allows the parameters of the hydraulic elevator installation to be selected for a given well depth and borehole diameter.

  19. Full traveltime inversion in source domain

    KAUST Repository

    Liu, Lu

    2017-06-01

    This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is to build near-surface velocity automatically from the early arrivals of seismic data. The method generates an inverted velocity for which the reconstructed plane-wave source of early arrivals kinematically best matches the true source in the source domain. It does not require picking first arrivals, which is one of the most challenging aspects of ray-based tomographic inversion. Moreover, it does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. We applied our method to a synthetic dataset; the results show that it can generate a reasonable background velocity even when shingled first arrivals exist, and can provide a good initial velocity for conventional full waveform inversion (FWI).

  20. Wavelet processing techniques for digital mammography

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse to fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency, results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  1. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating

  2. Estimation of Symptom Severity Scores for Patients with Schizophrenia Using ERP Source Activations during a Facial Affect Discrimination Task.

    Science.gov (United States)

    Kim, Do-Won; Lee, Seung-Hwan; Shim, Miseon; Im, Chang-Hwan

    2017-01-01

    Precise diagnosis of psychiatric diseases and a comprehensive assessment of a patient's symptom severity are important in order to establish a successful treatment strategy for each patient. Although great efforts have been devoted to searching for diagnostic biomarkers of schizophrenia over the past several decades, no study has yet investigated how accurately these biomarkers are able to estimate an individual patient's symptom severity. In this study, we applied electrophysiological biomarkers obtained from electroencephalography (EEG) analyses to an estimation of symptom severity scores of patients with schizophrenia. EEG signals were recorded from 23 patients while they performed a facial affect discrimination task. Based on the source current density analysis results, we extracted voxels that showed a strong correlation between source activity and symptom scores. We then built a prediction model to estimate the symptom severity scores of each patient using the source activations of the selected voxels. The symptom scores of the Positive and Negative Syndrome Scale (PANSS) were estimated using the linear prediction model. The results of leave-one-out cross validation (LOOCV) showed that the mean errors of the estimated symptom scores were 3.34 ± 2.40 and 3.90 ± 3.01 for the Positive and Negative PANSS scores, respectively. The current pilot study is the first attempt to estimate symptom severity scores in schizophrenia using quantitative EEG features. It is expected that the present method can be extended to other cognitive paradigms or other psychological illnesses.

  3. Estimation of Symptom Severity Scores for Patients with Schizophrenia Using ERP Source Activations during a Facial Affect Discrimination Task

    Directory of Open Access Journals (Sweden)

    Do-Won Kim

    2017-08-01

    Full Text Available Precise diagnosis of psychiatric diseases and a comprehensive assessment of a patient's symptom severity are important in order to establish a successful treatment strategy for each patient. Although great efforts have been devoted to searching for diagnostic biomarkers of schizophrenia over the past several decades, no study has yet investigated how accurately these biomarkers are able to estimate an individual patient's symptom severity. In this study, we applied electrophysiological biomarkers obtained from electroencephalography (EEG) analyses to an estimation of symptom severity scores of patients with schizophrenia. EEG signals were recorded from 23 patients while they performed a facial affect discrimination task. Based on the source current density analysis results, we extracted voxels that showed a strong correlation between source activity and symptom scores. We then built a prediction model to estimate the symptom severity scores of each patient using the source activations of the selected voxels. The symptom scores of the Positive and Negative Syndrome Scale (PANSS) were estimated using the linear prediction model. The results of leave-one-out cross validation (LOOCV) showed that the mean errors of the estimated symptom scores were 3.34 ± 2.40 and 3.90 ± 3.01 for the Positive and Negative PANSS scores, respectively. The current pilot study is the first attempt to estimate symptom severity scores in schizophrenia using quantitative EEG features. It is expected that the present method can be extended to other cognitive paradigms or other psychological illnesses.
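
The LOOCV error estimate described in the abstract can be sketched with an ordinary least-squares linear model on synthetic data (the activations and scores below are simulated stand-ins, not the study's data):

```python
import numpy as np

def loocv_abs_errors(features, y):
    # Leave-one-out cross-validation: fit on all subjects but one,
    # predict the held-out subject's score, record the absolute error.
    X = np.column_stack([np.ones(len(y)), features])   # add intercept
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append(abs(X[i] @ w - y[i]))
    return np.array(errs)

# Simulated "source activation -> symptom score" data for 23 patients
rng = np.random.default_rng(1)
acts = rng.normal(size=(23, 3))                       # 3 selected voxels
scores = acts @ np.array([4.0, -2.0, 1.0]) + 20.0 + rng.normal(scale=0.5, size=23)
errs = loocv_abs_errors(acts, scores)
mean_err = errs.mean()
```

Reporting mean ± SD of `errs` mirrors the way the study summarizes its PANSS estimation errors.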

  4. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images due to its inherent properties. We applied wavelet transform compression and reconstruction to neutron cross section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)
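
The compress-and-reconstruct scheme can be sketched with a one-level Haar transform plus thresholding of the detail coefficients (numpy-only; the smooth curve below merely mimics a featureless cross-section shape):

```python
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Smooth "cross-section-like" curve: most detail coefficients are negligible
e = np.linspace(0.0, 1.0, 256)
sigma = 1.0 / (1.0 + 100.0 * e ** 2)
a, d = haar_forward(sigma)
d[np.abs(d) < 1e-3] = 0.0            # drop negligible details (compression)
kept = int(np.count_nonzero(d))      # detail coefficients still stored
recon = haar_inverse(a, d)
```

Only the nonzero details need storing, while the reconstruction error stays bounded by the discard threshold.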

  5. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images due to its inherent properties. We applied wavelet transform compression and reconstruction to neutron cross section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)

  6. Construction of a class of Daubechies type wavelet bases

    International Nuclear Information System (INIS)

    Li Dengfeng; Wu Guochang

    2009-01-01

    Extensive work has been done in the theory and the construction of compactly supported orthonormal wavelet bases of L 2 (R). Some of the most distinguished work was done by Daubechies, who constructed a whole family of such wavelet bases. In this paper, we construct a class of orthonormal wavelet bases by using the principle of Daubechies, and investigate the length of support and the regularity of these wavelet bases.

  7. Estimation of dietary flavonoid intake and major food sources of Korean adults.

    Science.gov (United States)

    Jun, Shinyoung; Shin, Sangah; Joung, Hyojee

    2016-02-14

    Epidemiological studies have suggested that flavonoids exhibit preventive effects on degenerative diseases. However, lack of sufficient data on flavonoid intake has limited evaluating the proposed effects in populations. Therefore, we aimed to estimate the total and individual flavonoid intakes among Korean adults and determine the major dietary sources of these flavonoids. We constructed a flavonoid database of common Korean foods, based on the food list reported in the 24-h recall of the Korea National Health and Nutrition Examination Survey (KNHANES) 2007-2012, using data from the Korea Functional Food Composition Table, US Department of Agriculture flavonoid database, Phenol-Explorer database and other analytical studies. This database, which covers 49 % of food items and 76 % of food intake, was linked with the 24-h recall data of 33 581 subjects aged ≥19 years in the KNHANES 2007-2012. The mean daily intake of total flavonoids in Korean adults was 318·0 mg/d, from proanthocyanidins (22·3%), flavonols (20·3%), isoflavones (18·1%), flavan-3-ols (16·2%), anthocyanidins (11·6%), flavanones (11·3%) and flavones (0·3%). The major contributing food groups to the flavonoid intake were fruits (54·4%), vegetables (20·5%), legumes and legume products (16·2%) and beverages and alcohols (3·1%), and the major contributing food items were apples (21·9%), mandarins (12·5%), tofu (11·5%), onions (9·6%) and grapes (9·0%). In the regression analysis, the consumption of legumes and legume products, vegetables and fruits predicted total flavonoid intake the most. The findings of this study could facilitate further investigation on the health benefits of flavonoids and provide the basic information for establishing recommended flavonoid intakes for Koreans.

  8. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability.

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H

    2016-10-01

    The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and inter-individual variability in glycemic response to a single food challenge and the methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m2): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing the sample size, replication of reference and test foods, and length of blood sampling, as well as the AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in the mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that GI is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value determinations.
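
The GI computation rests on the incremental area under the glycemic response curve; a simplified sketch using the trapezoidal rule (the response values are invented for illustration, and dips below baseline are simply clipped):

```python
import numpy as np

def incremental_auc(t, glucose):
    # Area above the fasting baseline; excursions below baseline are
    # clipped to zero (a simplification of the standard iAUC rule).
    t = np.asarray(t, float)
    g = np.maximum(np.asarray(glucose, float) - glucose[0], 0.0)
    return float(np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(t)))

t = [0, 15, 30, 45, 60, 90, 120]             # minutes after ingestion
ref = [5.0, 7.2, 8.0, 7.0, 6.2, 5.4, 5.0]    # glucose reference (mmol/L)
bread = [5.0, 6.3, 7.1, 6.6, 5.9, 5.3, 5.0]  # white-bread test meal
gi = 100.0 * incremental_auc(t, bread) / incremental_auc(t, ref)
```

Much of the variability the study reports enters through exactly these choices: sampling times, baseline handling, and the AUC formula itself.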

  9. Estimating and correcting the amplitude radiation pattern of a virtual source

    NARCIS (Netherlands)

    Van der Neut, J.; Bakulin, A.

    2009-01-01

    In the virtual source (VS) method we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface

  10. Use of a Bayesian isotope mixing model to estimate proportional contributions of multiple nitrate sources in surface water

    International Nuclear Information System (INIS)

    Xue Dongmei; De Baets, Bernard; Van Cleemput, Oswald; Hennessy, Carmel; Berglund, Michael; Boeckx, Pascal

    2012-01-01

    To identify different NO₃⁻ sources in surface water and to estimate their proportional contribution to the nitrate mixture in surface water, a dual isotope approach and a Bayesian isotope mixing model have been applied to six different surface waters affected by agriculture, greenhouses in an agricultural area, and households. Annual mean δ¹⁵N–NO₃⁻ values were between 8.0 and 19.4‰, while annual mean δ¹⁸O–NO₃⁻ values ranged from 4.5 to 30.7‰. SIAR was used to estimate the proportional contribution of five potential NO₃⁻ sources (NO₃⁻ in precipitation, NO₃⁻ fertilizer, NH₄⁺ in fertilizer and rain, soil N, and manure and sewage). SIAR showed that “manure and sewage” contributed the most, “soil N”, “NO₃⁻ fertilizer” and “NH₄⁺ in fertilizer and rain” contributed intermediate amounts, and “NO₃⁻ in precipitation” contributed the least. The SIAR output can be considered a “fingerprint” of the NO₃⁻ source contributions. However, the wide range of isotope values observed in surface water and in the NO₃⁻ sources limits its applicability. - Highlights: ► The dual isotope approach (δ¹⁵N– and δ¹⁸O–NO₃⁻) identifies dominant nitrate sources in 6 surface waters. ► The SIAR model estimates proportional contributions for 5 nitrate sources. ► SIAR is a reliable approach to assess temporal and spatial variations of different NO₃⁻ sources. ► The wide range of isotope values observed in surface water and in the nitrate sources limits its applicability. - This paper successfully applied a dual isotope approach and a Bayesian isotopic mixing model to identify and quantify 5 potential nitrate sources in surface water.

  11. Estimation of distance error by fuzzy set theory required for strength determination of HDR ¹⁹²Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) ¹⁹²Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm³ is one of the recommended methods for measuring the RAKR of HDR ¹⁹²Ir brachytherapy sources. When using the cylindrical chamber method, it is necessary to determine the positioning error of the ionization chamber with respect to the source, called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error, and a simplified approach for quantifying this uncertainty is proposed. To express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and found to be within 2.5%, which indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distances l_i estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other: the crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory, indicating that they carry an uncertainty of within 2.5%. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget when estimating the expanded uncertainty in HDR ¹⁹²Ir source strength measurement.
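A fuzzy-set treatment of a measurement error like the distance error typically represents the quantity as a fuzzy number and reads off interval bounds at a chosen membership level (an α-cut). A toy sketch with a triangular fuzzy number; the nominal distance and support are invented for illustration, not the paper's data:

```python
def alpha_cut(a, m, b, alpha):
    """Interval of the triangular fuzzy number (a, m, b) at membership level alpha:
    linearly interpolate from the support [a, b] toward the modal value m."""
    return (a + alpha * (m - a), b - alpha * (b - m))

# Hypothetical chamber-to-source distance (cm): nominal 10.0, support +/- 0.25
lo, hi = alpha_cut(9.75, 10.0, 10.25, alpha=0.5)
rel_uncertainty_pct = 100.0 * (hi - lo) / 2.0 / 10.0
print(lo, hi, rel_uncertainty_pct)
```

The crisp analytical value would then be checked against the [lo, hi] bounds, mirroring the consistency check described in the abstract.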

  12. A Comparative Study on Optimal Structural Dynamics Using Wavelet Functions

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Mahdavi

    2015-01-01

    Full Text Available Wavelet solution techniques have become a focus of interest among researchers in different disciplines of science and technology. In this paper, the implementation of two different wavelet basis functions is comparatively considered for the dynamic analysis of structures. To this end, a computational technique is first developed using the free scale of the simple Haar wavelet. Complex, continuous Chebyshev wavelet basis functions are then presented to improve the time history analysis of structures. The free-scaled Chebyshev coefficient matrix and the operation of integration are derived to directly approximate the displacements of the corresponding system. In addition, the stability of responses is investigated for the proposed discrete Haar wavelet algorithm compared against the continuous Chebyshev wavelet. To demonstrate the validity of the wavelet-based algorithms, both schemes are extended to linear and nonlinear structural dynamics. The effectiveness of the free-scaled Chebyshev wavelet is compared with the simple Haar wavelet and two common integration methods. It is deduced that both the indirect method proposed for the discrete Haar wavelet and the direct approach for the continuous Chebyshev wavelet are unconditionally stable. Finally, it is concluded that the numerical solution benefits greatly from the low computation time and high accuracy of response, particularly using a low scale of the complex Chebyshev wavelet.

  13. On extensions of wavelet systems to dual pairs of frames

    DEFF Research Database (Denmark)

    Christensen, Ole; Kim, Hong Oh; Kim, Rae Young

    2015-01-01

    It is an open problem whether any pair of Bessel sequences with wavelet structure can be extended to a pair of dual frames by adding a pair of singly generated wavelet systems. We consider the particular case where the given wavelet systems are generated by the multiscale setup with trigonometric...

  14. Fast generation of computer-generated holograms using wavelet shrinkage.

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
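The shrinkage idea in this record, keeping only a few representative wavelet coefficients and zeroing the rest, can be illustrated with a one-level orthonormal Haar transform in NumPy. This is a generic sketch of coefficient shrinkage, not the authors' CGH algorithm, and the test signal is invented:

```python
import numpy as np

def haar_1d(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar_1d(a, d):
    """Inverse of haar_1d."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def shrink(x, keep):
    """Keep only the `keep` largest-magnitude coefficients (approximation
    and detail pooled), zero the rest, and reconstruct."""
    a, d = haar_1d(x)
    c = np.concatenate([a, d])
    small = np.argsort(np.abs(c))[:-keep]  # indices of the smallest coefficients
    c[small] = 0.0
    n = len(a)
    return ihaar_1d(c[:n], c[n:])

x = np.array([4.0, 4.0, 8.0, 8.0, 1.0, 1.0, 3.0, 3.0])
y = shrink(x, keep=4)
```

For this piecewise-constant signal all the energy sits in the four approximation coefficients, so keeping the 4 largest of 8 coefficients reconstructs it exactly; on real complex amplitudes the discarded small coefficients produce an approximation, which is the source of the speed-up described above.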

  15. Image encryption using the fractional wavelet transform

    International Nuclear Information System (INIS)

    Vilardy, Juan M; Useche, J; Torres, C O; Mattos, L

    2011-01-01

    In this paper a technique for the encryption of digital images is developed using the fractional wavelet transform (FWT) and random phase masks (RPMs). The digital image to be encrypted is transformed with the FWT; the resulting coefficients (approximation and horizontal, vertical and diagonal details) are each multiplied by a different, statistically independent RPM; and an inverse wavelet transform (IWT) is applied to these results, yielding the encrypted digital image. The decryption technique is the encryption technique applied in reverse. This technique provides immediate security advantages over conventional techniques: the mother wavelet family and the fractional orders associated with the FWT are additional keys that, besides the RPMs used, make the information difficult to access for an unauthorized person, so the level of encryption security is greatly increased. The mathematical support for the use of the FWT in the computational encryption algorithm is also developed in this work.

  16. Partially coherent imaging and spatial coherence wavelets

    International Nuclear Information System (INIS)

    Castaneda, Roman

    2003-03-01

    A description of spatially partially coherent imaging based on the propagation of second-order spatial coherence wavelets and marginal power spectra (Wigner distribution functions) is presented. In this dynamics, the spatial coherence wavelets are affected by the system through its elementary transfer function. The consistency of the model with both extreme cases of fully coherent and incoherent imaging is proved. In the latter case we obtain the classical concept of the optical transfer function as a simple integral of the elementary transfer function. Furthermore, the elementary incoherent response function is introduced as the Fourier transform of the elementary transfer function. It describes the propagation of spatial coherence wavelets from each object point to each image point through a specific point on the pupil planes. The point spread function of the system is obtained by a simple integral of the elementary incoherent response function. (author)

  17. Motion compensation via redundant-wavelet multihypothesis.

    Science.gov (United States)

    Fowler, James E; Cui, Suxia; Wang, Yonghui

    2006-10-01

    Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.

  18. ECG denoising with adaptive bionic wavelet transform.

    Science.gov (United States)

    Sayadi, Omid; Shamsollahi, Mohammad Bagher

    2006-01-01

    In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named the bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. The BWT has several outstanding features, such as nonlinearity, high sensitivity and frequency selectivity, concentrated energy distribution and the ability to reconstruct the signal via the inverse transform, but its most distinguishing characteristic is that its resolution in the time-frequency domain can be adaptively adjusted not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. By optimizing the BWT parameters in parallel with a modified threshold value, ECG denoising results comparable to those of the wavelet transform (WT) can be obtained. Preliminary tests of the BWT applied to ECG denoising were conducted on signals from the MIT-BIH database and showed high noise-reduction performance.
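As a baseline for comparison, the conventional wavelet-threshold denoising that the BWT is measured against can be sketched with a one-level Haar transform and the universal soft threshold. This is a generic WT baseline, not the BWT itself, and the signal and noise level are invented for the example:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(x):
    """One-level Haar wavelet denoising with the universal threshold
    sigma * sqrt(2 ln n); sigma is estimated robustly from the detail
    coefficients via the median absolute deviation."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band (mostly noise)
    sigma = np.median(np.abs(d)) / 0.6745  # robust noise-level estimate
    d = soft(d, sigma * np.sqrt(2 * np.log(len(x))))
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in for an ECG trace
noisy = clean + 0.2 * rng.standard_normal(256)
denoised = denoise(noisy)
```

In practice a multilevel decomposition with a smoother wavelet would be used; the one-level Haar version keeps the mechanics visible.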

  19. Using radiometric surface temperature for surface energy flux estimation in Mediterranean drylands from a two-source perspective

    DEFF Research Database (Denmark)

    Morillas, L.; Garcia Garcia, Monica; Nieto Solana, Hector

    2013-01-01

    A two-source model (TSM) for surface energy balance, considering explicitly soil and vegetation components, was tested under water stress conditions. The TSM evaluated estimates the sensible heat flux (H) using the surface-air thermal gradient and the latent heat flux (LE) as a residual from the ...

  20. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  1. Reactor condition monitoring and singularity detection via wavelet and use of entropy in Monte Carlo calculation

    International Nuclear Information System (INIS)

    Kim, Ok Joo

    2007-02-01

    Wavelet theory was applied to detect singularities in reactor power signals. Compared to the Fourier transform, the wavelet transform has localization properties in both space and frequency; therefore, singular points can be found easily by wavelet transform after de-noising. To demonstrate this, we generated reactor power signals using a HANARO (a Korean multi-purpose research reactor) dynamics model consisting of 39 nonlinear differential equations and Gaussian noise, and applied wavelet decomposition and de-noising procedures to these signals. The method was effective in detecting singular events such as sudden reactivity changes and abrupt changes in intrinsic properties, and could therefore be profitably utilized in a real-time system for automatic event recognition (e.g., reactor condition monitoring). In addition, using the wavelet de-noising concept, variance reduction of Monte Carlo results was attempted. To obtain a correct solution in a Monte Carlo calculation, small uncertainty is required, which is quite time-consuming on a computer. Instead of a long calculation in the Monte Carlo code (MCNP), wavelet de-noising can be performed to obtain small uncertainties. We applied this idea to MCNP results for k_eff and the fission source; variance was reduced somewhat while the average value was kept constant. In MCNP criticality calculations, an initial guess for the fission distribution is used, which can contaminate the solution. To avoid this, a sufficient number of initial generations, called inactive cycles, should be discarded. A convergence check can provide a guideline for determining when the active cycles should start. Various entropy functions were tried to check the convergence of the fission distribution, and some reflect its convergence behavior well. Entropy could thus be a powerful tool for determining inactive/active cycles in MCNP calculations.
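The Shannon-entropy convergence diagnostic mentioned above is easy to sketch: bin the fission source spatially each cycle, compute H = -Σ p ln p over the bins, and start the active cycles once H plateaus. The per-cycle counts below are invented purely to illustrate the plateau behavior:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy H = -sum p ln p of a binned fission-source distribution."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                  # 0 * ln 0 is taken as 0
    return float(-np.sum(p * np.log(p)))

# Hypothetical per-cycle source counts over 4 spatial bins: the distribution
# starts concentrated (low entropy) and relaxes toward its converged shape.
cycles = [
    [100, 0, 0, 0],
    [70, 20, 8, 2],
    [45, 30, 15, 10],
    [40, 30, 18, 12],
    [39, 31, 18, 12],
]
H = [shannon_entropy(c) for c in cycles]
# Declare the source converged once H changes by less than 1% between cycles;
# cycles before that point would be discarded as inactive.
```

A production calculation would use a fine 3-D mesh and many more cycles, but the stopping rule is the same.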

  2. Orthonormal Wavelet Bases for Quantum Molecular Dynamics

    International Nuclear Information System (INIS)

    Tymczak, C.; Wang, X.

    1997-01-01

    We report on the use of compactly supported, orthonormal wavelet bases for quantum molecular-dynamics (Car-Parrinello) algorithms. A wavelet selection scheme is developed and tested for prototypical problems, such as the three-dimensional harmonic oscillator, the hydrogen atom, and the local density approximation to atomic and molecular systems. Our method shows systematic convergence with increased grid size, along with improvement on compression rates, thereby yielding an optimal grid for self-consistent electronic structure calculations. copyright 1997 The American Physical Society

  3. Wavelet methods in mathematical analysis and engineering

    CERN Document Server

    Damlamian, Alain

    2010-01-01

    This book gives a comprehensive overview of both the fundamentals of wavelet analysis and related tools, and of the most active recent developments towards applications. It offers a state-of-the-art account of several active areas of research where wavelet ideas, or more generally multiresolution ideas, have proved particularly effective. The main applications covered are in the numerical analysis of PDEs, and signal and image processing. Recently introduced techniques such as Empirical Mode Decomposition (EMD) and new trends in the recovery of missing data, such as compressed sensing, are also presented.

  4. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N; Haddad, Paul R

    2001-01-01

    The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course

  5. Estimated Intakes and Sources of Total and Added Sugars in the Canadian Diet

    OpenAIRE

    Brisbois, Tristin D.; Marsden, Sandra L.; Anderson, G. Harvey; Sievenpiper, John L.

    2014-01-01

    National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data a...

  6. A New Method for Multisensor Data Fusion Based on Wavelet Transform in a Chemical Plant

    Directory of Open Access Journals (Sweden)

    Karim Salahshoor

    2014-07-01

    Full Text Available This paper presents a new multi-sensor data fusion method based on the combination of the wavelet transform (WT) and the extended Kalman filter (EKF). Input data are first filtered by a wavelet transform using Daubechies “db4” wavelet functions, and the filtered data are then fused based on variance weights in terms of minimum mean square error. The fused data are finally treated by an extended Kalman filter for the final state estimation. The most recent data are recursively utilized to apply the wavelet transform and extract the variance of the updated data, which makes the method suitable for both static and dynamic systems corrupted by noisy environments. The method performs well in state estimation in comparison with alternative algorithms. A three-tank benchmark system has been adopted to comparatively demonstrate the performance merits of the method compared to a known algorithm in terms of efficiently satisfying signal-to-noise (SNR) and minimum square error (MSE) criteria.
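The variance-weighted fusion step described here (minimum mean square error for independent estimates) reduces to inverse-variance weighting. A minimal sketch; the tank-level readings and variances are hypothetical, standing in for the wavelet-filtered sensor outputs:

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-mean-square-error fusion of independent estimates of the same
    quantity: weights proportional to inverse variance."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)               # normalized inverse-variance weights
    fused = float(np.dot(w, np.asarray(estimates, dtype=float)))
    fused_var = 1.0 / float(np.sum(1.0 / v))      # always <= the smallest input variance
    return fused, fused_var

# Two hypothetical level sensors on the same tank (cm), with variances
# estimated from their wavelet-filtered residuals:
value, var = fuse([50.2, 49.6], [0.04, 0.16])
print(value, var)
```

The fused value leans toward the lower-variance sensor, and the fused variance is smaller than either input variance, which is what makes the fused signal a better input to the EKF stage.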

  7. Earthquake source parameter and focal mechanism estimates for the Western Quebec Seismic Zone in eastern Canada

    Science.gov (United States)

    Rodriguez Padilla, A. M.; Onwuemeka, J.; Liu, Y.; Harrington, R. M.

    2017-12-01

    The Western Quebec Seismic Zone (WQSZ) is a 160-km-wide band of intraplate seismicity extending 500 km from the Adirondack Highlands (United States) to the Laurentian uplands (Canada). Historically, the WQSZ has experienced over fifteen earthquakes above magnitude 5, with the noteworthy MN5.2 Ladysmith event on May 17, 2013. Previous studies have associated seismicity in the area to the reactivation of Early Paleozoic normal faults within a failed Iapetan rift arm, or strength contrasts between mafic intrusions and felsic rocks due to the Mesozoic track of the Great Meteor hotspot. A good understanding of seismicity and its relation to pre-existing structures requires information about event source properties, such as static stress drop and fault plane orientation, which can be constrained via spectral analysis and focal mechanism solutions. Using data recorded by the CNSN and USArray Transportable Array, we first characterize b-value for 709 events between 2012 and 2016 in WQSZ, obtaining a value of 0.75. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for 35 events MN 2.7+. We select event pairs with highly similar waveforms, proximal hypocenters, and magnitudes differing by 1-2 units. Our preliminary results using single-station spectra show corner frequencies of 15 to 40 Hz and stress drop values between 7 and 130 MPa, typical of intraplate seismicity. We plan to fit the event pair spectral ratios to correct for attenuation. Last, we determine focal mechanism solutions of 35 events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. Our preliminary results suggest predominantly thrust faulting mechanisms, and at times oblique thrust faulting. The P-axis trend of the focal mechanism solutions suggests a principal stress orientation of NE-SW, which is consistent with that derived from focal mechanisms of earthquakes prior to 2013.

  8. A simplified approach to estimating reference source terms for LWR designs

    International Nuclear Information System (INIS)

    1999-12-01

    systems. The publication of this IAEA technical document represents the conclusion of a task, initiated in 1996, devoted to the estimation of the radioactive source term in nuclear reactors. It focuses mainly on light water reactors (LWRs)

  9. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics of the s...
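Model-order selection with BIC, as used in this record, scores each candidate model's exact log-likelihood with a complexity penalty that grows with the parameter count and the number of observations; the number of sources minimizing the criterion is chosen. A sketch with invented log-likelihoods and parameter counts (not the paper's values):

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower is better."""
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# Hypothetical exact log-likelihoods for separators assuming 1..4 sources,
# with a made-up parameter count growing linearly in the model order:
log_likelihoods = [-5200.0, -4710.0, -4695.0, -4693.0]
n_obs = 2000
scores = {k: bic(ll, 10 * k, n_obs)
          for k, ll in zip(range(1, 5), log_likelihoods)}
best = min(scores, key=scores.get)
print(best)
```

Here the jump from one to two sources buys far more likelihood than the penalty costs, while the third and fourth sources do not, so BIC settles on two sources.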

  10. Estimating and correcting the amplitude radiation pattern of a virtual source

    OpenAIRE

    Van der Neut, J.; Bakulin, A.

    2009-01-01

    In the virtual source (VS) method we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface and recorded by an array of receivers in a borehole. The quality of the VS data depends on the radiation pattern of the virtual source, which in turn is controlled by the spatial aperture of the sur...

  11. Examining effective use of data sources and modeling algorithms for improving biomass estimation in a moist tropical forest of the Brazilian Amazon

    Science.gov (United States)

    Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella

    2017-01-01

    Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....

  12. A study of biorthogonal multiple vector-valued wavelets

    International Nuclear Information System (INIS)

    Han Jincang; Cheng Zhengxing; Chen Qingjiang

    2009-01-01

    The notion of vector-valued multiresolution analysis is introduced, together with the concept of biorthogonal multiple vector-valued wavelets, i.e., wavelets for vector fields. It is proved that, as in the scalar and multiwavelet cases, the existence of a pair of biorthogonal multiple vector-valued scaling functions guarantees the existence of a pair of biorthogonal multiple vector-valued wavelet functions. An algorithm for constructing a class of compactly supported biorthogonal multiple vector-valued wavelets is presented, and their properties are investigated by means of operator theory, algebra theory and time-frequency analysis methods. Several biorthogonality formulas regarding the associated wavelet packets are obtained.

  13. Solution of wave-like equation based on Haar wavelet

    Directory of Open Access Journals (Sweden)

    Naresh Berwal

    2012-11-01

    Full Text Available Wavelet transforms and wavelet analysis are powerful mathematical tools for many problems, and wavelets can also be applied in numerical analysis. In this paper, we apply the Haar wavelet method to solve a wave-like equation with known initial and boundary conditions. The fundamental idea of the Haar wavelet method is to convert the differential equation into a group of algebraic equations involving a finite number of variables. The results and graphs show that the proposed method agrees well with the exact solution.

  14. Estimation of impact from natural sources of radiation sources in two non nuclear plant workers and nearby residents

    International Nuclear Information System (INIS)

    Sousa, Wanderson de Oliveira

    2005-09-01

    Naturally occurring radioactive materials, often referred to as NORM, are and always have been a part of our world. Our planet Earth and its atmosphere contain many different types of naturally occurring radioactive species, mainly minerals containing radionuclides of the uranium and thorium decay series. Human activities for the exploitation of mineral resources, such as mining, do not necessarily enhance the concentration of NORM in products, by-products or residues, but can be a concern simply due to the increased potential for human exposure. The goal of this work is to assess the impact of two non-nuclear plants (a coal mining plant and a monazite extraction plant) on workers and the general population living in the vicinity of the plants, by calculating their internal exposure to natural radionuclides. Excreta samples (urine and feces) were collected from workers and from inhabitants of the two small towns where the workers reside. The activities of ²³⁸U, ²³⁴U (only in feces), ²²⁶Ra, ²¹⁰Pb and ²¹⁰Po (only in urine) present in the samples were determined. The results of daily excretion in urine and feces of the groups indicate that workers from the coal mining plant are exposed to natural radionuclides by inhalation and ingestion. The intake of some radionuclides (²³⁸U and ²¹⁰Po) is influenced by the professional activity. The results also indicate a chronic intake of ²²⁶Ra by workers of the coal mining plant and their neighbors. Workers from the monazite extraction plant and inhabitants of the vicinity of the plant are exposed mainly by ingestion; intake through the diet is the major source of incorporation of natural radionuclides. (author)

  15. DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    K. B. Cui

    2017-12-01

    Full Text Available Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is presented for the DOA estimation problem of multiple time-frequency (t-f) joint LFM sources. Previous work in the area, e.g. the STFT-MUSIC algorithm, cannot resolve sources that are largely or completely overlapped in the t-f domain, because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS techniques to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve largely or completely t-f joint LFM sources. In addition, the proposed algorithm has low computational complexity when resolving multiple LFM sources because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
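The classical narrowband MUSIC pseudo-spectrum underlying these variants projects array steering vectors onto the noise subspace of the sample covariance and peaks where the projection vanishes. A self-contained NumPy sketch for a uniform linear array; this is plain MUSIC, not the STFT/FBSS variant, and all scenario numbers are invented:

```python
import numpy as np

def music_spectrum(R, n_sources, grid_deg, spacing=0.5):
    """Narrowband MUSIC pseudo-spectrum for a ULA (element spacing in wavelengths)."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : M - n_sources]          # noise-subspace eigenvectors
    p = np.empty(len(grid_deg))
    for i, th in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(th))
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

# Two uncorrelated narrowband sources at -20 and +30 degrees, 8-element ULA.
rng = np.random.default_rng(1)
M, N = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N                  # sample covariance

grid = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(R, 2, grid)
d1 = grid[np.argmax(P)]                                        # strongest peak
d2 = grid[np.argmax(np.where(np.abs(grid - d1) > 5, P, 0.0))]  # next, well separated
estimates = sorted([d1, d2])
```

For LFM (chirp) sources the covariance would instead be built from selected t-f points of the STFT, which is exactly what the STFD construction in the abstract provides.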

  16. Monitoring the size and protagonists of the drug market: combining supply and demand data sources and estimates.

    Science.gov (United States)

    Rossi, Carla

    2013-06-01

    The size of the illicit drug market is an important indicator for assessing the impact on society of an important part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets reduces the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods, using several data sets, both administrative and survey. First, the retail dealer prevalence was estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, allowing the size of the drug market to be estimated. The estimates were obtained using both a supply-side and a demand-side approach and were compared. These results are in turn used to estimate the interception rate for the different substances in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
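The multiplier method referenced here scales a benchmark count (e.g. users seen in treatment) by the inverse of the benchmark rate obtained from survey data; a demand-side market size then follows from prevalence, average quantity and unit price. All figures below are invented purely for illustration:

```python
def multiplier_estimate(benchmark_count, benchmark_rate):
    """Multiplier method: scale a known subpopulation by the inverse of the
    rate at which members of the hidden population appear in it."""
    return benchmark_count / benchmark_rate

# Hypothetical figures: 12,000 users seen in treatment during the year,
# and surveys suggest 15% of users enter treatment in a given year.
n_users = multiplier_estimate(12_000, 0.15)

# Demand-side market size: users x average annual quantity x retail unit price
# (grams/year and euros/gram, both hypothetical).
market_value = n_users * 52.0 * 60.0
print(n_users, market_value)
```

A supply-side estimate built from dealer prevalence and turnover would then be compared against this figure, and large discrepancies would flag biased benchmarks or rates.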

  17. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    Science.gov (United States)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted from the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in the PSCAD software.

  18. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    Science.gov (United States)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  19. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet, whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269, produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. By analyzing the spectrum of the wavelet filters, we can relate compression outcomes and feature-preservation characteristics to the choice of wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
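
The tap-4 search space the paper explores can be illustrated with the classical one-parameter family of orthonormal tap-4 low-pass filters (a standard construction, not necessarily the authors' parameterization); every angle theta yields a valid orthogonal wavelet, and theta = pi/3 recovers the Daubechies D4 filter:

```python
import numpy as np

def tap4_lowpass(theta):
    """One-parameter family of orthonormal tap-4 low-pass wavelet filters.

    Searching over theta is one way to look for a task-specific kernel,
    in the spirit of the optimization described above.
    """
    c, s = np.cos(theta), np.sin(theta)
    return np.array([1 - c + s, 1 + c + s, 1 + c - s, 1 - c - s]) / (2 * np.sqrt(2))

h = tap4_lowpass(np.pi / 3)
# Orthonormality conditions satisfied by every member of the family:
assert np.isclose(h.sum(), np.sqrt(2))            # DC gain
assert np.isclose((h ** 2).sum(), 1.0)            # unit energy
assert np.isclose(h[0] * h[2] + h[1] * h[3], 0.0) # shift-2 orthogonality
print(h)  # approximately [0.483, 0.8365, 0.2241, -0.1294] -- the D4 filter
```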

  20. Quantum dynamics and electronic spectroscopy within the framework of wavelets

    International Nuclear Information System (INIS)

    Toutounji, Mohamad

    2013-01-01

    This paper serves as a first-time report on formulating important aspects of electronic spectroscopy and quantum dynamics in condensed harmonic systems using the framework of wavelets, and a stepping stone to our future work on developing anharmonic wavelets. The Morlet wavelet is taken to be the mother wavelet for the initial state of the system of interest. This work reports daughter wavelets that may be used to study spectroscopy and dynamics of harmonic systems. These wavelets are shown to arise naturally upon optical electronic transition of the system of interest. Natural birth of basis (daughter) wavelets emerging on exciting an electronic two-level system coupled, both linearly and quadratically, to harmonic phonons is discussed. It is shown that this takes place through using the unitary dilation and translation operators, which happen to be part of the time evolution operator of the final electronic state. The corresponding optical autocorrelation function and linear absorption spectra are calculated to test the applicability and correctness of the results presented herein. The link between basis wavelets and the Liouville space generating function is established. An anharmonic mother wavelet is also proposed in the case of anharmonic electron–phonon coupling. A brief description of deriving anharmonic wavelets and the corresponding anharmonic Liouville space generating function is explored. In conclusion, a mother wavelet (be it harmonic or anharmonic) which accounts for Duschinsky mixing is suggested. (paper)
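
The dilation–translation structure the abstract invokes is the standard one. As a reference point (these are textbook definitions, not the paper's specific derivation), the Morlet mother wavelet and its daughters are

```latex
\psi(t) = \pi^{-1/4}\, e^{i\omega_0 t}\, e^{-t^2/2}, \qquad
\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right),
```

where \(a > 0\) is the dilation (scale) and \(b\) the translation. The unitary dilation and translation operators mentioned above act on \(\psi\) precisely by producing the daughters \(\psi_{a,b}\), which is why they emerge from the final-state time evolution operator.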

  1. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error; errors can in fact cancel each other and propagate non-linearly to the flux estimates. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
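
The MCMC side of the comparison can be sketched with a toy one-parameter version: a single flux scaling factor estimated by a random-walk Metropolis sampler (all numbers hypothetical; the real inversions estimate one factor per sub-region against a transport model):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = 2.0 + rng.normal(0, 0.1, 50)   # "observations" of the scaled flux

def log_post(s):
    # Gaussian likelihood (model-observation mismatch, sigma = 0.1)
    # plus a Gaussian prior on the scaling factor (mean 1, sigma = 1).
    return -0.5 * np.sum((obs - s) ** 2) / 0.1 ** 2 - 0.5 * (s - 1.0) ** 2

samples, s = [], 1.0                 # start at the prior mean
for _ in range(5000):
    prop = s + rng.normal(0, 0.05)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(s):
        s = prop                     # Metropolis accept
    samples.append(s)
post_mean = np.mean(samples[1000:])  # discard burn-in
```

With 50 tight observations the posterior mean lands near 2.0 regardless of the prior; in the paper's setting the interesting failure modes appear when the "observations" themselves carry transport-model error.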

  2. Cross dynamics of oil-stock interactions: A redundant wavelet analysis

    International Nuclear Information System (INIS)

    Jammazi, Rania

    2012-01-01

    on the oil – stock market linkages' sensitivity to the degree of improvement in energy efficiency of a given country, the degree of oil shock persistence, and to whether a country is an oil importer or exporter (among other suggested factors). -- Highlights: ► Mixed results are generally found for stock-oil market interactions. ► Standard techniques suffer from a series of gaps and do not provide robust conclusions. ► The Haar à trous wavelet appears to be a more efficient tool for circumventing those gaps. ► We explore the stock-CO market nexus by wavelet correlation, variance and cross-correlation. ► Results depend on the country's oil dependency and on the source and persistence of CO changes.

  3. Decay ratio studies in BWR and PWR using wavelet

    International Nuclear Information System (INIS)

    Ciftcioglu, Oe.

    1996-10-01

    The on-line stability of BWRs and PWRs is studied using neutron noise signals, as the fluctuations reflect the dynamic characteristics of the reactor. Using appropriate signal modeling for time domain analysis of noise signals, the stability parameters can be directly obtained from the system impulse response. Here, in particular for a BWR, an important stability parameter is the decay ratio (DR) of the impulse response. The time series analysis involves autoregressive modeling of the neutron detector signal. The DR determination is strongly affected by the low frequency behaviour, since the transfer function characteristic tends to be a third order system rather than a second order system for a BWR. In a PWR the low frequency behaviour is modified by the Boron concentration. As a result of these phenomena there are difficulties in the consistent determination of the DR. The consistency of the DR estimation is enhanced by the wavelet transform, using actual power plant data from a BWR and a PWR. A comparative study of the DR estimation with and without wavelets is presented. (orig.)
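
For a second-order (damped-oscillator) system the DR follows directly from the AR(2) pole location; a minimal sketch (the plant analysis fits higher-order AR models to real neutron noise, which this toy omits):

```python
import numpy as np

def decay_ratio_ar2(a1, a2):
    """DR of the impulse response of x[n] = a1*x[n-1] + a2*x[n-2] + e[n].

    Complex poles r*exp(+/- i*theta): successive oscillation peaks are one
    period (2*pi/theta samples) apart, so their amplitude ratio is
    r**(2*pi/theta).
    """
    poles = np.roots([1.0, -a1, -a2])
    r, theta = np.abs(poles[0]), np.abs(np.angle(poles[0]))
    return r ** (2 * np.pi / theta)

# Stable oscillatory AR(2) with pole radius 0.95 at 0.3 rad/sample:
r, th = 0.95, 0.3
dr = decay_ratio_ar2(2 * r * np.cos(th), -r ** 2)
print(f"decay ratio = {dr:.3f}")  # 0.342
```

In practice the AR coefficients would come from fitting the (possibly wavelet-filtered) neutron noise signal rather than being chosen by hand.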

  4. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    Science.gov (United States)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    Portfolio rebalancing deals with resetting the proportions of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost constraints. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportions of investment in the portfolio of assets and also to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with those of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.
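
The covariance-denoising step can be caricatured with a simple soft-thresholding rule (a deliberately simplified stand-in for the wavelet shrinkage used in the paper; the function name and threshold are illustrative):

```python
import numpy as np

def shrink_covariance(returns, threshold):
    """Soft-threshold the off-diagonal entries of a sample covariance matrix.

    Small, noise-dominated covariances are zeroed; large ones are shrunk
    toward zero; variances on the diagonal are left intact.
    """
    S = np.cov(np.asarray(returns, float), rowvar=False)
    shrunk = np.sign(S) * np.maximum(np.abs(S) - threshold, 0.0)
    np.fill_diagonal(shrunk, np.diag(S))
    return shrunk

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(250, 5))   # 250 days x 5 assets
S_hat = shrink_covariance(returns, threshold=1e-4)
```

The denoised matrix then feeds the mean-variance objective; with independent synthetic assets, most off-diagonal noise is suppressed while the variances survive unchanged.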

  5. Estimated Intakes and Sources of Total and Added Sugars in the Canadian Diet

    Directory of Open Access Journals (Sweden)

    Tristin D. Brisbois

    2014-05-01

    Full Text Available National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data and adjusting these for wastage to estimate intakes. Added sugars intakes were estimated from CCHS data by categorizing the sugars content of food groups as either added or naturally occurring. Added sugars accounted for approximately half of total sugars consumed. Annual availability data were obtained from Statistics Canada CANSIM database. Estimates for added sugars were obtained by summing the availability of “sugars and syrups” with availability of “soft drinks” (proxy for high fructose corn syrup) and adjusting for waste. Analysis of both survey and availability data suggests that added sugars average 11%–13% of total energy intake. Availability data indicate that added sugars intakes have been stable or modestly declining as a percent of total energy over the past three decades. Although these are best estimates based on available data, this analysis may encourage the development of better databases to help inform public policy recommendations.

  6. Estimated intakes and sources of total and added sugars in the Canadian diet.

    Science.gov (United States)

    Brisbois, Tristin D; Marsden, Sandra L; Anderson, G Harvey; Sievenpiper, John L

    2014-05-08

    National food supply data and dietary surveys are essential to estimate nutrient intakes and monitor trends, yet there are few published studies estimating added sugars consumption. The purpose of this report was to estimate and trend added sugars intakes and their contribution to total energy intake among Canadians by, first, using Canadian Community Health Survey (CCHS) nutrition survey data of intakes of sugars in foods and beverages, and second, using Statistics Canada availability data and adjusting these for wastage to estimate intakes. Added sugars intakes were estimated from CCHS data by categorizing the sugars content of food groups as either added or naturally occurring. Added sugars accounted for approximately half of total sugars consumed. Annual availability data were obtained from Statistics Canada CANSIM database. Estimates for added sugars were obtained by summing the availability of "sugars and syrups" with availability of "soft drinks" (proxy for high fructose corn syrup) and adjusting for waste. Analysis of both survey and availability data suggests that added sugars average 11%-13% of total energy intake. Availability data indicate that added sugars intakes have been stable or modestly declining as a percent of total energy over the past three decades. Although these are best estimates based on available data, this analysis may encourage the development of better databases to help inform public policy recommendations.
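
The availability arithmetic described above is simple enough to show inline; all numbers below are hypothetical placeholders chosen only to land in the reported 11%–13% range:

```python
# Availability method, hypothetical worked example:
availability_g_per_day = 90.0   # sugars + syrups + soft-drink proxy, per capita
waste_fraction = 0.30           # retail + household wastage adjustment
intake_g = availability_g_per_day * (1 - waste_fraction)        # 63 g/day
energy_share = intake_g * 4 / 2000   # 4 kcal/g sugar, 2000 kcal reference diet
print(f"{intake_g:.0f} g/day ~ {energy_share:.0%} of energy")   # 63 g/day ~ 13%
```

The survey-based estimate works in the opposite direction (reported food intakes categorized into added vs. naturally occurring sugars); agreement between the two routes is what supports the 11%–13% figure.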

  7. Application of wavelet transform to seismic data; Wavelet henkan no jishin tansa eno tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Nakagami, K; Murayama, R; Matsuoka, T [Japan National Oil Corp., Tokyo (Japan)

    1996-05-01

    Introduced herein is the use of the wavelet transform in the field of seismic exploration. Applications so far include signal filtering, break point detection, data compression, and the solution of finite difference equations in the wavelet domain. In the field of data compression in particular, some examples of practical application have already been introduced. In seismic exploration, it is expected that the wavelet transform will separate signals and noise in data in a way different from the Fourier transform. The continuous wavelet transform displays time changes in frequency in an easily readable way, but is not suitable for the analysis and processing of large quantities of data. On the other hand, the discrete wavelet transform, being an orthogonal transform, can handle large quantities of data. Compared with the conventional Fourier transform, which handles only the frequency domain, the wavelet transform handles the time domain as well as the frequency domain, and is therefore more convenient for handling unsteady signals. 9 ref., 8 figs.
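
The continuous/discrete trade-off mentioned above can be felt in code: a naive Morlet CWT is a dense loop over scales, readable but far too slow for large seismic volumes. A sketch, not production seismic code:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Naive continuous wavelet transform with a Morlet mother wavelet.

    Cost is O(N * n_scales * support): fine for illustration, which is
    exactly why the orthogonal DWT is preferred for bulk data.
    """
    x = np.asarray(x, float)
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

t = np.arange(512)
x = np.sin(2 * np.pi * 0.05 * t)          # test tone at 0.05 cycles/sample
scales = np.arange(5, 41)
power = np.abs(morlet_cwt(x, scales))[:, 100:400].mean(axis=1)
best = scales[np.argmax(power)]           # expect ~ w0 / (2*pi*0.05), i.e. ~19
```

The scalogram's peak scale maps back to frequency via f ≈ w0 / (2πs), which is how the "time change in frequency" reading of the CWT works.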

  8. Information retrieval system utilizing wavelet transform

    Science.gov (United States)

    Brewster, Mary E.; Miller, Nancy E.

    2000-01-01

    A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  9. monthly energy consumption forecasting using wavelet analysis

    African Journals Online (AJOL)

    User

    ABSTRACT. Monthly energy forecasts help heavy consumers of electric power to prepare adequate budget to pay their electricity bills and also draw the attention of management and stakeholders to electricity consumption levels so that energy efficiency measures are put in place to reduce cost. In this paper, a wavelet ...

  10. Multiscale wavelet representations for mammographic feature analysis

    Science.gov (United States)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improved quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  11. Wavelet based multicarrier code division multiple access ...

    African Journals Online (AJOL)

    This paper presents the study on Wavelet transform based Multicarrier Code Division Multiple Access (MC-CDMA) system for a downlink wireless channel. The performance of the system is studied for Additive White Gaussian Noise Channel (AWGN) and slowly varying multipath channels. The bit error rate (BER) versus ...

  12. Estimating photometric redshifts for X-ray sources in the X-ATLAS field using machine-learning techniques

    Science.gov (United States)

    Mountrichas, G.; Corral, A.; Masoura, V. A.; Georgantopoulos, I.; Ruiz, A.; Georgakakis, A.; Carrera, F. J.; Fotopoulou, S.

    2017-12-01

    We present photometric redshifts for 1031 X-ray sources in the X-ATLAS field using the machine-learning technique TPZ. X-ATLAS covers 7.1 deg2 observed with XMM-Newton within the Science Demonstration Phase of the H-ATLAS field, making it one of the largest contiguous areas of the sky with both XMM-Newton and Herschel coverage. All of the sources have available SDSS photometry, while 810 additionally have mid-IR and/or near-IR photometry. A spectroscopic sample of 5157 sources primarily in the XMM/XXL field, but also from several X-ray surveys and the SDSS DR13 redshift catalogue, was used to train the algorithm. Our analysis reveals that the algorithm performs best when the sources are split, based on their optical morphology, into point-like and extended sources. Optical photometry alone is not enough to estimate accurate photometric redshifts, but the results greatly improve when at least mid-IR photometry is added in the training process. In particular, our measurements show that the estimated photometric redshifts for the X-ray sources of the training sample have a normalized absolute median deviation, nmad ≈ 0.06, and a percentage of outliers, η = 10-14%, depending upon whether the sources are extended or point like. Our final catalogue contains photometric redshifts for 933 out of the 1031 X-ray sources with a median redshift of 0.9. The table of the photometric redshifts is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A39
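
The two accuracy measures quoted above (normalized median absolute deviation and outlier percentage) are easy to compute; a sketch on synthetic redshifts, using the usual photo-z conventions (e.g. a 0.15 outlier cut, which may differ in detail from the paper's exact definitions):

```python
import numpy as np

def photz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Return (nmad, eta): normalized MAD and outlier fraction of dz/(1+z)."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1 + z_spec)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    eta = np.mean(np.abs(dz) > outlier_cut)
    return nmad, eta

# Synthetic sample with 5% scatter in (1+z):
rng = np.random.default_rng(3)
z_spec = rng.uniform(0.1, 2.5, 1000)
z_phot = z_spec + 0.05 * (1 + z_spec) * rng.normal(size=1000)
nmad, eta = photz_metrics(z_phot, z_spec)
```

For Gaussian scatter nmad recovers the scatter amplitude (here ~0.05), which is why it is the standard robust accuracy statistic for photometric redshifts.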

  13. Estimate of production of medical isotopes by photo-neutron reaction at the Canadian Light Source

    Science.gov (United States)

    Szpunar, B.; Rangacharyulu, C.; Daté, S.; Ejiri, H.

    2013-11-01

    In contrast to conventional bremsstrahlung photon beam sources, laser backscatter photon sources at electron synchrotrons provide the capability to selectively tune photons to energies of interest. This feature, coupled with the ubiquitous giant dipole resonance excitations of atomic nuclei, promises a fertile method of nuclear isotope production. In this article, we present the results of simulations of production of the medical/industrial isotopes 196Au, 192Ir and 99Mo by (γ,n) reactions. We employ FLUKA Monte Carlo code along with the simulated photon flux for a beamline at the Canadian Light Source in conjunction with a CO2 laser system.

  14. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Science.gov (United States)

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
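
A far simpler fusion rule than the paper's time-frequency coherence still illustrates why combining PPG features helps: if the normalized power spectra of several respiration-modulated features are multiplied, a spectral peak survives only when it is present in every feature. A hedged numpy sketch with synthetic feature series (not the authors' algorithm):

```python
import numpy as np

def combined_dominant_rate(features, fs):
    """Multiply normalized power spectra of several feature series and
    return the common dominant frequency (Hz)."""
    n = len(features[0])
    spec = np.ones(n // 2 + 1)
    for f in features:
        p = np.abs(np.fft.rfft(f - np.mean(f))) ** 2
        spec *= p / p.max()          # a peak must appear in every feature
    return np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

fs, n = 4.0, 512                     # 4 Hz feature series, 128 s window
t = np.arange(n) / fs
rng = np.random.default_rng(4)
resp = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz respiration (15 breaths/min)
pulse_width = resp + 0.5 * rng.normal(size=n)
pulse_amp = 0.8 * resp + 0.5 * rng.normal(size=n) + 0.3 * np.sin(2 * np.pi * 0.9 * t)
rate = combined_dominant_rate([pulse_width, pulse_amp], fs)
```

The spurious 0.9 Hz component present only in the amplitude feature is suppressed by the product, while the shared 0.25 Hz respiratory peak survives.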

  15. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Directory of Open Access Journals (Sweden)

    M. D. Peláez-Coca

    2013-01-01

    Full Text Available A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration.

  16. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-01-01

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise for example in biology, engineering, chemistry, and elsewhere. The parameters (or coefficients) and the source of PDE models are often unknown

  17. Data assimilation and source term estimation during the early phase of a nuclear accident

    Energy Technology Data Exchange (ETDEWEB)

    Golubenkov, A.; Borodin, R. [SPA Typhoon, Emergency Centre (Russian Federation); Sohier, A.; Rojas Palma, C. [Centre de l`Etude de l`Energie Nucleaire, Mol (Belgium)

    1996-02-01

    The mathematical/physical basis of possible methods to model the source term during an accidental release of radionuclides is discussed. Knowledge of the source term is important in view of optimizing urgent countermeasures for the population. In most cases, however, it will be impossible to assess the release dynamics directly. Therefore, methods are under development in which the source term is modelled based on the comparison of off-site monitoring data with model predictions from an atmospheric dispersion model. The degree of agreement between the measured and calculated characteristics of the radioactive contamination of the air and the ground surface is an important criterion in this process. Due to the inherent complexity, some geometrical transformations taking into account space-time discrepancies between observed and modelled contamination fields are defined before the source term is adapted. This work describes the developed algorithms, which are also tested against data from tracer experiments performed in the past. This method is also used to reconstruct the dynamics of the Chernobyl source term. Finally, this report presents a concept for software to reconstruct a multi-isotopic source term in real-time.
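
At its core, the comparison of monitoring data with dispersion-model predictions is a linear inverse problem; a toy least-squares version (a random matrix stands in for the dispersion model, and all release rates are hypothetical):

```python
import numpy as np

# Toy source-term reconstruction: a dispersion model maps release rates in
# three time segments to 20 monitoring measurements; the release vector is
# recovered by least squares. Real systems iterate this together with the
# geometric (space-time discrepancy) corrections described above.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(20, 3))        # unit-release footprints
q_true = np.array([5.0, 1.0, 3.0])             # release rates per segment
y = G @ q_true + rng.normal(0.0, 0.01, 20)     # monitoring data + noise
q_hat, *_ = np.linalg.lstsq(G, y, rcond=None)  # reconstructed source term
```

With low measurement noise and more measurements than unknowns the reconstruction is essentially exact; the hard part in practice is that G itself (the transport model) is uncertain.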

  18. Data assimilation and source term estimation during the early phase of a nuclear accident

    International Nuclear Information System (INIS)

    Golubenkov, A.; Borodin, R.; Sohier, A.; Rojas Palma, C.

    1996-02-01

    The mathematical/physical basis of possible methods to model the source term during an accidental release of radionuclides is discussed. Knowledge of the source term is important in view of optimizing urgent countermeasures for the population. In most cases, however, it will be impossible to assess the release dynamics directly. Therefore, methods are under development in which the source term is modelled based on the comparison of off-site monitoring data with model predictions from an atmospheric dispersion model. The degree of agreement between the measured and calculated characteristics of the radioactive contamination of the air and the ground surface is an important criterion in this process. Due to the inherent complexity, some geometrical transformations taking into account space-time discrepancies between observed and modelled contamination fields are defined before the source term is adapted. This work describes the developed algorithms, which are also tested against data from tracer experiments performed in the past. This method is also used to reconstruct the dynamics of the Chernobyl source term. Finally, this report presents a concept for software to reconstruct a multi-isotopic source term in real-time

  19. Application of Shannon Wavelet Entropy and Shannon Wavelet Packet Entropy in Analysis of Power System Transient Signals

    Directory of Open Access Journals (Sweden)

    Jikai Chen

    2016-12-01

    Full Text Available In a power system, the analysis of transient signals is the theoretical basis of fault diagnosis and transient protection theory. Shannon wavelet entropy (SWE) and Shannon wavelet packet entropy (SWPE) are powerful mathematical tools for transient signal analysis. Combined with recent achievements regarding SWE and SWPE, their applications in feature extraction of transient signals and transient fault recognition are summarized. For wavelet aliasing at adjacent scales of the wavelet decomposition, the impact of aliasing on the feature extraction accuracy of SWE and SWPE is analyzed, and their differences are compared. Meanwhile, these analyses are verified by partial discharge (PD) feature extraction for power cable. Finally, some new ideas and further research directions are proposed concerning the wavelet entropy mechanism, operation speed and how to overcome wavelet aliasing.
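
SWE itself is a short computation: take the relative energies p_j of the wavelet decomposition bands and form -Σ p_j ln p_j. A sketch using a Haar decomposition (any orthogonal wavelet would serve; the test signals are synthetic):

```python
import numpy as np

def haar_level_energies(x, levels=4):
    # Energies of the detail bands of a multilevel Haar DWT, plus the
    # final approximation band.
    x = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(np.sum(d ** 2))
        x = a
    energies.append(np.sum(x ** 2))
    return np.array(energies)

def shannon_wavelet_entropy(x, levels=4):
    # SWE = -sum p_j ln p_j over relative band energies p_j.
    e = haar_level_energies(x, levels)
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 4 * t)                   # energy confined to coarse bands
transient = clean + (np.abs(t - 0.5) < 0.01) * 2.0  # broadband spike spreads energy
print(shannon_wavelet_entropy(clean), shannon_wavelet_entropy(transient))
```

A steady narrowband signal yields low entropy (energy concentrated in few bands); superimposing a transient spreads energy across bands and raises the entropy, which is the feature exploited for fault recognition.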

  20. Period-dependent source rupture behavior of the 2011 Tohoku earthquake estimated by multi period-band Bayesian waveform inversion

    Science.gov (United States)

    Kubo, H.; Asano, K.; Iwata, T.; Aoi, S.

    2014-12-01

    Previous studies of the period-dependent source characteristics of the 2011 Tohoku earthquake (e.g., Koper et al., 2011; Lay et al., 2012) were based on short- and long-period source models obtained using different methods. Kubo et al. (2013) obtained source models of the 2011 Tohoku earthquake from multi period-band waveform data using a common inversion method and discussed its period-dependent source characteristics. In this study, to image the spatiotemporal source rupture behavior of this event in more detail, we introduce a new fault surface model having finer sub-fault size and estimate the source models in multi period-bands using a Bayesian inversion method combined with a multi-time-window method. Three components of velocity waveforms at 25 stations of K-NET, KiK-net, and F-net of NIED are used in this analysis. The target period band is 10-100 s. We divide this period band into three period bands (10-25 s, 25-50 s, and 50-100 s) and estimate a kinematic source model in each period band using a Bayesian inversion method with MCMC sampling (e.g., Fukuda & Johnson, 2008; Minson et al., 2013, 2014). The parameterization of the spatiotemporal slip distribution follows the multi-time-window method (Hartzell & Heaton, 1983). The Green's functions are calculated by the 3D FDM (GMS; Aoi & Fujiwara, 1999) using a 3D velocity structure model (JIVSM; Koketsu et al., 2012). The assumed fault surface model is based on the Pacific plate boundary of JIVSM and is divided into 384 subfaults of about 16 * 16 km^2. The estimated source models in multi period-bands show the following source image: (1) First deep rupture off Miyagi at 0-60 s toward down-dip, mostly radiating relatively short period (10-25 s) seismic waves. (2) Shallow rupture off Miyagi at 45-90 s toward up-dip with long duration, radiating long period (50-100 s) seismic waves. (3) Second deep rupture off Miyagi at 60-105 s toward down-dip, radiating longer period seismic waves than those of the first deep rupture.
(4) Deep

  1. Inverse estimation of source parameters of oceanic radioactivity dispersion models associated with the Fukushima accident

    Directory of Open Access Journals (Sweden)

    Y. Miyazawa

    2013-04-01

    Full Text Available With combined use of ocean-atmosphere simulation models and field observation data, we evaluate the parameters associated with the total caesium-137 amounts of the direct release into the ocean and the atmospheric deposition over the western North Pacific caused by the accident at the Fukushima Daiichi nuclear power plant (FNPP) in March 2011. The Green's function approach is adopted to estimate the two parameters determining the total emission amounts for the period from 12 March to 6 May 2011. It is confirmed that the validity of the estimation depends on the simulation skill near FNPP. The total amount of the direct release is estimated as 5.5-5.9 × 10^15 Bq, while that of the atmospheric deposition is estimated as 5.5-9.7 × 10^15 Bq; the latter range is broader than that of the direct release owing to uncertainty in the deposition spread widely over the western North Pacific.
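    The Green's function approach used here amounts to running the dispersion model once per unit source and then finding the scaling factors that best fit the observations. A minimal numpy sketch, with invented unit-source responses and "observed" concentrations standing in for the real model runs and field data:

```python
import numpy as np

# Simulated responses at 6 monitoring points to *unit* releases:
# one model run for the direct release, one for atmospheric deposition.
g_direct = np.array([4.0, 2.5, 1.0, 0.5, 0.2, 0.1])
g_deposit = np.array([0.5, 0.8, 1.2, 1.5, 1.0, 0.6])
G = np.column_stack([g_direct, g_deposit])

# "Observed" concentrations: 5.7 units of direct release plus 7.6 units
# of deposition, with measurement noise (synthetic stand-in values).
rng = np.random.default_rng(1)
obs = G @ np.array([5.7, 7.6]) + rng.normal(scale=0.1, size=6)

# Least-squares estimate of the two total-amount parameters.
params, *_ = np.linalg.lstsq(G, obs, rcond=None)
amount_direct, amount_deposit = params
```

Because the model is linear in the two source amplitudes, a single least-squares solve replaces any re-running of the dispersion model; the quality of the estimate hinges on how well the unit-source simulations reproduce transport near the plant, as the abstract notes.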

  2. Seasonal variation and source estimation of organic compounds in urban aerosol of Augsburg, Germany

    International Nuclear Information System (INIS)

    Pietrogrande, Maria Chiara; Abbaszade, Guelcin; Schnelle-Kreis, Juergen; Bacco, Dimitri; Mercuriali, Mattia; Zimmermann, Ralf

    2011-01-01

    This study reports a general assessment of the organic composition of PM 2.5 samples collected in the city of Augsburg, Germany during a summer (August-September 2007) and a winter (February-March 2008) campaign of 36 and 30 days, respectively. The samples were directly submitted to in-situ derivatisation thermal desorption gas chromatography coupled with time-of-flight mass spectrometry (IDTD-GC-TOFMS) to simultaneously determine the concentrations of many classes of molecular markers, such as n-alkanes, iso- and anteiso-alkanes, polycyclic aromatic hydrocarbons (PAHs), oxidized PAHs, n-alkanoic acids, alcohols, saccharides and others. Principal component analysis (PCA) of the data identified the contributions of three emission sources: combustion sources, including fossil fuel emissions and biomass burning; vegetative detritus; and oxidized PAHs. The PM chemical composition shows a seasonal trend: winter is characterized by a high contribution of petroleum/wood combustion, while the vegetative component and atmospheric photochemical reactions predominate in the hot season. - Highlights: → 59 molecular markers were simultaneously determined by thermal desorption GC-MS. → The organic composition of urban PM 2.5 in Augsburg, Germany, was characterized. → Fossil fuel, vegetative detritus, and coal/wood burning are the main sources. → Seasonal trends, winter vs. summer, were identified. - The organic composition of the urban PM 2.5 identifies the seasonal trend of the main sources: fossil fuel and biomass combustion, vegetative detritus, and atmospheric photochemical reactions.
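    The PCA step, extracting a few latent emission sources from correlated marker concentrations, can be sketched with a plain-numpy SVD on a synthetic marker matrix; the sample counts, marker count and source profiles below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily marker concentrations (rows: 66 sample days, columns:
# 8 markers). Two latent "sources" generate correlated markers, mimicking
# e.g. combustion tracers vs. vegetative-detritus tracers.
n_days, n_markers = 66, 8
source_strength = rng.lognormal(size=(n_days, 2))
profiles = rng.uniform(0.1, 1.0, size=(2, n_markers))
X = source_strength @ profiles + rng.normal(scale=0.02, size=(n_days, n_markers))

# PCA: standardize the columns, then take the SVD of the centered matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)     # variance fraction per component
scores = U * s                      # sample scores on each component
loadings = Vt                       # marker loadings per component
```

With two latent sources and little noise, the first two components capture almost all of the variance; in the study, the loadings on the marker axes are what allow each component to be interpreted as a physical emission source.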

  3. Microseismic imaging using a source-independent full-waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2016-09-06

    Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, waveform inversion of microseismic events faces strong nonlinearity due to the unknown source location (in space) and source function (in time). We develop a source-independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. An extended image for the source wavelet along the z axis is extracted to check the accuracy of the inverted source image and velocity model, and angle gathers are computed to assess the velocity model. By jointly inverting for the source image, source wavelet and velocity model, the proposed method produces good estimates of the source location, ignition time and background velocity for part of the SEG overthrust model.

  5. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    Science.gov (United States)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values were estimated using the coda-wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation, Qβ(f) = (152.9 ± 7) f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focused earthquakes with low stress drop. Estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by partial stress drop and the low effective stress model. The presence of subsurface fluids at seismogenic depth evidently influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation and site corrections.
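    Fitting a power-law quality factor Qβ(f) = Q0 f^n reduces to linear regression in log-log space. A numpy sketch on synthetic per-frequency Q estimates (the values below are generated from an assumed Q(f) = 150 f^0.8 with scatter, not the study's measurements):

```python
import numpy as np

# Synthetic station-averaged Q estimates at the analysis frequencies (Hz),
# generated from Q(f) = 150 f^0.8 with multiplicative scatter.
freqs = np.array([1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 16.0])
rng = np.random.default_rng(3)
Q_obs = 150.0 * freqs**0.8 * rng.lognormal(sigma=0.05, size=freqs.size)

# A power law Q(f) = Q0 * f**n is linear in log space:
#   log Q = log Q0 + n * log f
n_exp, logQ0 = np.polyfit(np.log(freqs), np.log(Q_obs), 1)
Q0 = np.exp(logQ0)
```

In this parameterization the study's relation Qβ(f) = (152.9 ± 7) f^(0.82±0.005) corresponds to Q0 ≈ 152.9 and n ≈ 0.82; uncertainties on both would come from the regression covariance.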

  6. Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)

    1997-08-01

    In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or P-frame, and the B-frames are bidirectionally interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP), but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide an intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.

  7. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    Science.gov (United States)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes in the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm aided by a mutual information criterion is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
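    The central idea, expanding time-varying AR coefficients on basis functions so the model becomes linear in the expansion coefficients, can be sketched for a TVAR(1) process. A low-order polynomial basis stands in here for the paper's multi-scale wavelet basis, and plain least squares replaces FOLS term selection:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a TVAR(1) process x[t] = a(t) x[t-1] + e[t] with a slowly
# drifting coefficient a(t).
T = 2000
t = np.linspace(0.0, 1.0, T)
a_true = 0.3 + 0.5 * t                       # drifts from 0.3 to 0.8
x = np.zeros(T)
for k in range(1, T):
    x[k] = a_true[k] * x[k - 1] + rng.normal(scale=0.5)

# Expand a(t) = sum_j c_j * b_j(t) on a basis (polynomials here as a
# stand-in for wavelet basis functions); the model is then linear in c:
#   x[t] = sum_j c_j * (b_j(t) * x[t-1]) + e[t]
basis = np.vstack([np.ones(T), t, t**2]).T   # (T, 3) design of b_j(t)
Phi = basis[1:] * x[:-1, None]               # regressors b_j(t) * x[t-1]
c, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
a_est = basis @ c                            # estimated time-varying coefficient
```

A wavelet basis plays the same role as the polynomial basis above but can also follow abrupt coefficient jumps, which is exactly what the multi-scale scheme in the paper is designed to do.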

  8. Pyramidal Watershed Segmentation Algorithm for High-Resolution Remote Sensing Images Using Discrete Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    K. Parvathi

    2009-01-01

    Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques for detecting local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and the watershed is then applied to the gradient image to avoid over-segmentation. The segmented image is projected up to higher resolutions using the inverse wavelet transform. The watershed segmentation is applied to a small, subset-size image, demanding less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrate the method to be effective.
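    The key cost-saving step, running the watershed on a low-resolution wavelet approximation, can be sketched with a one-level 2-D Haar transform in plain numpy. The watershed itself (e.g. from a morphology library) is omitted; it would operate on the quarter-size approximation image:

```python
import numpy as np

def haar2d_level(img):
    """One level of the orthonormal 2-D Haar DWT: returns the approximation
    and the horizontal/vertical/diagonal detail subbands, each half-size."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 2.0
    horiz  = (a - b + c - d) / 2.0
    vert   = (a + b - c - d) / 2.0
    diag   = (a - b - c + d) / 2.0
    return approx, (horiz, vert, diag)

# Toy "remote sensing image": two bright regions on a dark background.
img = np.zeros((64, 64))
img[8:24, 8:24] = 1.0
img[40:56, 36:60] = 2.0

approx, details = haar2d_level(img)
# A watershed would now run on the 32x32 approximation: 4x fewer pixels
# per level, and the inverse transform projects the result back up.
```

Because the transform is orthonormal, the subband energies sum exactly to the image energy, so nothing is lost in the decomposition; segmenting the approximation simply trades boundary precision for speed.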

  9. Extrapolating cosmic ray variations and impacts on life: Morlet wavelet analysis

    Science.gov (United States)

    Zarrouk, N.; Bennaceur, R.

    2009-07-01

    Exposure to cosmic rays may have both direct and indirect effects on Earth's organisms. The radiation may lead to higher rates of genetic mutation in organisms, or interfere with their ability to repair DNA damage, potentially leading to diseases such as cancer. Increased cloud cover, which may cool the planet by blocking out more of the Sun's rays, is also associated with cosmic rays. Cosmic rays also interact with molecules in the atmosphere to create nitrogen oxide, a gas that eats away at our planet's ozone layer, which protects us from the Sun's harmful ultraviolet rays. On the ground, humans are protected from cosmic particles by the planet's atmosphere. In this paper we give estimated results of wavelet analysis of solar modulation and cosmic ray data incorporated in time-dependent cosmic ray variation. Since solar activity can be described as a non-linear chaotic dynamic system, methods such as neural networks and wavelet methods should be very suitable analytical tools. Thus we have computed our results using Morlet wavelets. Many authors have used wavelet techniques for studying solar activity. Here we have analysed and reconstructed cosmic ray variation, resolving periods and harmonics beyond the 11-year solar modulation cycle.
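    A minimal numpy-only Morlet wavelet analysis, applied to a synthetic series with a known 11-sample cycle (a stand-in for yearly cosmic-ray data modulated by the 11-year solar cycle), recovers the dominant period from the global wavelet spectrum:

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet,
    computed by direct convolution (numpy only)."""
    out = np.empty((len(scales), signal.size), dtype=complex)
    t = np.arange(-4 * max(scales), 4 * max(scales) + 1)
    for i, s in enumerate(scales):
        tau = t / s
        psi = np.pi**-0.25 * np.exp(1j * w0 * tau) * np.exp(-tau**2 / 2)
        psi /= np.sqrt(s)
        out[i] = np.convolve(signal, np.conj(psi[::-1]), mode='same')
    return out

# Synthetic "cosmic-ray variation": an 11-sample cycle plus noise.
rng = np.random.default_rng(5)
n = 256
x = np.sin(2 * np.pi * np.arange(n) / 11.0) + 0.3 * rng.normal(size=n)

scales = np.arange(2, 30)
power = np.abs(morlet_cwt(x, scales))**2
mean_power = power.mean(axis=1)            # global wavelet spectrum
peak_scale = scales[np.argmax(mean_power)]
# For a Morlet with w0=6, period ≈ scale * 4*pi/(w0 + sqrt(2 + w0^2)).
peak_period = peak_scale * 4 * np.pi / (6.0 + np.sqrt(2.0 + 36.0))
```

Averaging the scalogram over time gives the global spectrum used to read off dominant periodicities; keeping the full time-scale map is what lets the paper track how those periodicities wax and wane.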

  10. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    Science.gov (United States)

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
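    The soft-thresholding step over wavelet coefficients can be sketched with a one-level orthonormal Haar transform on a synthetic peak-on-continuum spectrum. The paper iterates over a full multi-resolution decomposition; the spectrum below is invented, not in vivo data:

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal 1-D Haar DWT."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, thr):
    """Soft thresholding: shrink coefficients toward zero."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

# Synthetic gamma spectrum: a Gaussian peak at channel 185 on a decaying
# continuum, with Poisson counting noise.
rng = np.random.default_rng(6)
ch = np.arange(512)
clean = 200 * np.exp(-ch / 300) + 800 * np.exp(-0.5 * ((ch - 185) / 4) ** 2)
noisy = rng.poisson(clean).astype(float)

a, d = haar_fwd(noisy)
# Universal threshold with a robust (median-based) noise estimate.
thr = np.sqrt(2 * np.log(d.size)) * np.median(np.abs(d)) / 0.6745
den = haar_inv(a, soft(d, thr))

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_den = np.sqrt(np.mean((den - clean) ** 2))
```

Thresholding only the detail band suppresses the statistical fluctuations while leaving the peak position intact, which is the property the in vivo analysis depends on.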

  11. 3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture

    Science.gov (United States)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben

    2016-12-01

    This paper deals with the features of a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align mesh features and minimize distortion with a fixed feature set that minimizes the sum of the distances between all corresponding vertices. The new mesh deformation method works in the domain of a region of interest (ROI). Our approach computes the deformed ROI, then updates and optimizes it to align mesh features based on the MLWNN and a spherical parameterization configuration. This structure has the advantage of constructing the network from several mother wavelets, so that high-dimensional problems can be solved using the mother wavelet that best models the signal. Simulation tests confirm the robustness and speed of the approach when developing deformation methodologies. The mean-square error and the deformation ratio are low compared with other state-of-the-art works. Our approach minimizes distortion with fixed features to obtain a well-reconstructed object.

  12. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input's amplitudes. Second, closed-form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of the pointwise input and the fractional differentiation orders is also presented, together with a discussion of the performance of the proposed algorithm.

  13. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.

    2017-12-22

    The optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in this paper. A subset of N candidate sensing locations is selected to measure a state parameter and send the observations to a fusion center via a wireless AWGN channel. In addition to selecting the optimal sensing locations, the sensor type to be placed at each location is selected from a pool of T sensor types, where different sensor types have different power ratings and costs. The sensor transmission power is limited by the amount of energy harvested at the sensing location and the type of the sensor. The Kalman filter is used to efficiently obtain the MMSE estimate at the fusion center. Sensors are selected such that the MMSE estimation error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
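    The scalar-state Kalman filter used at the fusion center can be sketched as follows; the process and measurement parameters are illustrative, not taken from the paper, and sensor/channel noise are lumped into a single measurement noise term:

```python
import numpy as np

rng = np.random.default_rng(7)

# Temporally correlated scalar state: x[k+1] = a x[k] + w, observed
# through an AWGN channel: y[k] = x[k] + v.
a, q, r = 0.95, 0.1, 0.5
T = 200
x = np.zeros(T)
for k in range(1, T):
    x[k] = a * x[k - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)

# Kalman filter: the predict/update recursion gives the MMSE estimate.
xh, P = 0.0, 1.0
est = np.zeros(T); var = np.zeros(T)
for k in range(T):
    # predict
    xh, P = a * xh, a * a * P + q
    # update with measurement y[k]
    K = P / (P + r)                  # Kalman gain
    xh = xh + K * (y[k] - xh)
    P = (1 - K) * P
    est[k], var[k] = xh, P

mse_raw = np.mean((y - x) ** 2)      # error using raw measurements
mse_kf = np.mean((est - x) ** 2)     # error using Kalman estimates
```

The steady-state error variance P is what the sensor-selection problem in the paper minimizes: each candidate location/type changes the effective r, and the optimization picks the subset whose resulting P is smallest within budget.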

  14. Wavelet-like bases for thin-wire integral equations in electromagnetics

    Science.gov (United States)

    Francomano, E.; Tortorici, A.; Toscano, E.; Ala, G.; Viola, F.

    2005-03-01

    In this paper, wavelets are used in solving, by the method of moments, a modified version of the thin-wire electric field integral equation in the frequency domain. The time domain electromagnetic quantities are obtained using the inverse discrete fast Fourier transform. The retarded scalar electric and vector magnetic potentials are employed to obtain the integral formulation. The discretized model, generated by applying the direct method of moments via a point-matching procedure, results in a linear system with a dense matrix which has to be solved for each frequency of the Fourier spectrum of the time-domain impressed source. Therefore, an orthogonal wavelet-like basis transform is used to sparsify the moment matrix. In particular, dyadic and M-band wavelet transforms have been adopted, generating different sparse matrix structures, which leads to an efficient solution of the resulting sparse matrix equation. Moreover, a wavelet preconditioner is used to accelerate the convergence rate of the iterative solver employed. These numerical features are used in analyzing the transient behavior of a lightning protection system. In particular, the transient performance of the earth termination system of a lightning protection system, or of the earth electrode of an electric power substation, during its operation is examined. The numerical results, obtained for a complex structure, are discussed and the features of the method are underlined.

  15. Automated Classification and Removal of EEG Artifacts With SVM and Wavelet-ICA.

    Science.gov (United States)

    Sai, Chong Yeh; Mokhtar, Norrima; Arof, Hamzah; Cumming, Paul; Iwahashi, Masahiro

    2018-05-01

    Brain electrical activity recordings by electroencephalography (EEG) are often contaminated with signal artifacts. Procedures for automated removal of EEG artifacts are frequently sought for clinical diagnostics and brain-computer interface applications. In recent years, a combination of independent component analysis (ICA) and the discrete wavelet transform has been introduced as a standard technique for EEG artifact removal. However, in performing the wavelet-ICA procedure, visual inspection or arbitrary thresholding may be required for identifying artifactual components in the EEG signal. We now propose a novel approach for identifying artifactual components separated by wavelet-ICA using a pretrained support vector machine (SVM). Our method presents a robust and extendable system that enables fully automated identification and removal of artifacts from EEG signals, without applying any arbitrary thresholding. Using test data contaminated by eye blink artifacts, we show that our method performed better in identifying artifactual components than existing thresholding methods. Furthermore, wavelet-ICA in conjunction with SVM successfully removed target artifacts, while largely retaining the EEG source signals of interest. We propose a set of features, including kurtosis, variance, Shannon's entropy, and amplitude range, as training and test data for the SVM to identify eye blink artifacts in EEG signals. This combinatorial method is also extendable to accommodate multiple types of artifacts present in multichannel EEG. We envision future research exploring other descriptive features corresponding to other types of artifactual components.
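    The feature set the authors feed to the SVM (kurtosis, variance, Shannon's entropy, amplitude range) is straightforward to reproduce. Below, numpy-only feature extraction separates a synthetic blink-like component from an ongoing-EEG-like one; a real pipeline would train a classifier such as scikit-learn's SVC on many such feature vectors:

```python
import numpy as np

def component_features(c):
    """Kurtosis, variance, Shannon entropy (histogram-based) and
    amplitude range of one independent component."""
    z = (c - c.mean()) / c.std()
    kurtosis = np.mean(z**4) - 3.0              # excess kurtosis
    variance = np.var(c)
    hist, _ = np.histogram(c, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    amp_range = c.max() - c.min()
    return np.array([kurtosis, variance, entropy, amp_range])

rng = np.random.default_rng(8)
t = np.arange(2000)
# Ongoing-EEG-like component: broadband Gaussian activity.
eeg = rng.normal(size=t.size)
# Blink-like component: mostly flat with a few large slow deflections.
blink = 0.2 * rng.normal(size=t.size)
for c0 in (300, 900, 1500):
    blink += 8.0 * np.exp(-0.5 * ((t - c0) / 25.0) ** 2)

f_eeg = component_features(eeg)
f_blink = component_features(blink)
```

Blink components are sparse and spiky, so they show high kurtosis and low entropy relative to ongoing EEG, which is why this small feature set is already quite discriminative.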

  16. Harmonic analysis of traction power supply system based on wavelet decomposition

    Science.gov (United States)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed rail and heavy-haul transport, and the large-scale operation of AC-drive electric locomotives and EMUs across the country, the electrified railway has become the main harmonic source in China's power grid. This situation calls for timely monitoring, assessment, and mitigation of the power quality problems of electrified railways. The wavelet transform was developed on the basis of Fourier analysis, with the basic idea coming from harmonic analysis and a rigorous theoretical model; it inherits and extends the localization idea of the Gabor transform while overcoming drawbacks such as the fixed window and the lack of discrete orthogonality, making it a widely studied spectral analysis tool. Wavelet analysis takes progressively finer time-domain steps in the high-frequency part, allowing it to focus on any detail of the signal being analyzed; the harmonics of the traction power supply system can thus be analyzed comprehensively, while the pyramid algorithm speeds up the wavelet decomposition. A MATLAB simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
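    The pyramid (Mallat) algorithm repeatedly splits off the high-frequency half-band and recurses on the low half. A numpy-only Haar version applied to a traction-supply-like signal shows the mechanics; the sampling rate and harmonic content below are illustrative, and Haar filters are used only for brevity (practical harmonic analysis would use longer, sharper filters):

```python
import numpy as np

def haar_pyramid(x, levels):
    """Mallat pyramid with the Haar filter pair: each level splits off a
    detail (high-frequency) band and recurses on the low-frequency
    approximation, halving the length every level."""
    details = []
    a = x
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details

# Test signal sampled at 3200 Hz: a 50 Hz fundamental plus a smaller
# 11th harmonic at 550 Hz.
fs, n = 3200.0, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 550 * t)

approx, details = haar_pyramid(x, 5)
# Energy per subband; nominal bands: d1 800-1600 Hz, d2 400-800 Hz, ...
# The transform is orthonormal, so the energies sum to the signal energy.
band_energy = [float(np.sum(d**2)) for d in details] + [float(np.sum(approx**2))]
```

Each level costs O(n) operations, so the whole pyramid is O(n), which is the speed advantage the abstract attributes to the pyramid algorithm.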

  17. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. There are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the resources of federal and state health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  18. An examination of sources of sensitivity of consumer surplus estimates in travel cost models.

    Science.gov (United States)

    Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E

    2015-03-15

    We examine the sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: (a) Poisson versus negative binomial regression; (b) application of the Englin correction to account for endogenous stratification; and (c) truncation of the data set to eliminate outliers. The fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on those obtained from the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative, but not statistically significantly different, estimate of consumer surplus in both Poisson and negative binomial models; it also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
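    In the semilog count-data TCM, per-trip consumer surplus is -1/β_tc, where β_tc is the fitted travel-cost coefficient. A numpy-only Newton-Raphson (equivalently IRLS) fit of the Poisson model on synthetic trip data; all coefficients and dollar values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic visitors: trips ~ Poisson(exp(b0 + b_tc * travel_cost)),
# with true b_tc = -0.02, i.e. consumer surplus of 1/0.02 = $50 per trip.
n = 5000
tc = rng.uniform(5.0, 150.0, size=n)          # travel cost ($)
X = np.column_stack([np.ones(n), tc])
beta_true = np.array([1.0, -0.02])
trips = rng.poisson(np.exp(X @ beta_true))

# Newton-Raphson for the Poisson log-likelihood with log link.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (trips - mu)                 # score
    hess = X.T @ (X * mu[:, None])            # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

cs_per_trip = -1.0 / beta[1]                  # consumer surplus per trip
```

Truncation, the Englin correction and overdispersion (the negative binomial case) all act by changing the likelihood this recursion maximizes, which is why the abstract finds they can move β_tc, and hence the surplus estimate, substantially.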

  19. Source term estimation based on in-situ gamma spectrometry using a high purity germanium detector

    International Nuclear Information System (INIS)

    Pauly, J.; Rojas-Palma, C.; Sohier, A.

    1997-06-01

    An alternative method to reconstruct the source term of a nuclear accident is proposed. The technique discussed here involves the use of in-situ gamma spectrometry. Validation of the applied methodology has been possible through the monitoring of routine releases of Ar-41 originating from an air-cooled graphite research reactor at a Belgian site. This technique provides a quick nuclide-specific decomposition of the source term and therefore has enormous potential if implemented in nuclear emergency preparedness and radiological assessments of nuclear accidents during the early phase.

  20. Characterization and source estimation of size-segregated aerosols during 2008-2012 in an urban environment in Beijing

    International Nuclear Information System (INIS)

    Yu, Lingda; Wang, Guangfu; Zhang, Renjiang

    2013-01-01

    Full text: During 2008-2012, size-segregated aerosol samples were collected using an eight-stage cascade impactor at the Beijing Normal University (BNU) site, China. These samples were analyzed using particle induced X-ray emission (PIXE) analysis for concentrations of 21 elements: Mg, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Ba and Pb. The size-resolved data sets were then analyzed using the positive matrix factorization (PMF) technique in order to identify possible sources and estimate their contribution to the particulate matter mass. Nine sources were resolved in eight size ranges (0.25-16 μm): secondary sulphur, motor vehicles, coal combustion, oil combustion, road dust, biomass burning, soil dust, diesel vehicles and metal processing. PMF analysis of the size-resolved source contributions showed that natural sources, represented by soil dust and road dust, contributed about 57% of the predicted primary particulate matter (PM) mass in the coarse size range (>2 μm). On the other hand, anthropogenic sources such as secondary sulphur, coal and oil combustion, biomass burning and motor vehicles contributed about 73% in the fine size range (<2 μm). The diesel vehicle and secondary sulphur sources contributed the most in the ultra-fine size range (<0.25 μm) and were responsible for about 52% of the primary PM mass. (author)
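    PMF factorizes the concentration matrix X ≈ G·F with non-negative contributions G and source profiles F. The sketch below uses plain multiplicative-update NMF as a stand-in for the PMF algorithm (real PMF additionally weights each residual by its measurement uncertainty); the data are synthetic, with three known sources:

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic dataset: 120 samples x 21 elements, generated by 3 sources
# with non-negative profiles plus small positive noise.
n_samples, n_elems, k = 120, 21, 3
G_true = rng.uniform(0.0, 2.0, size=(n_samples, k))    # contributions
F_true = rng.uniform(0.0, 1.0, size=(k, n_elems))      # source profiles
X = G_true @ F_true + 0.01 * rng.uniform(size=(n_samples, n_elems))

# Multiplicative updates for  min ||X - G F||_F^2  with G, F >= 0.
G = rng.uniform(0.1, 1.0, size=(n_samples, k))
F = rng.uniform(0.1, 1.0, size=(k, n_elems))
eps = 1e-9
err = []
for _ in range(300):
    F *= (G.T @ X) / (G.T @ G @ F + eps)
    G *= (X @ F.T) / (G @ F @ F.T + eps)
    err.append(float(np.linalg.norm(X - G @ F)))

rel_err = err[-1] / float(np.linalg.norm(X))
```

The rows of F are interpreted as chemical source profiles (e.g. a soil-dust profile rich in Al, Si, Ca, Fe) and the columns of G as per-sample source contributions; running the factorization per size range yields the size-resolved apportionment reported in the abstract.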