WorldWideScience

Sample records for gaussian dispersion algorithm

  1. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new models have high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.

  2. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new models have high efficiency and accuracy for concentration prediction. • The new models were applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) model can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model in the emission source parameter identification problem.
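
    As a rough illustration of the Gaussian-MLA coupling described above, the sketch below feeds a classic Gaussian plume prediction into a support vector regressor trained against observed concentrations. The plume parameterisation, the synthetic observations and all numerical values are assumptions for illustration, not the authors' model or data.

```python
# Minimal sketch of a "Gaussian-SVM" style coupling: the classic Gaussian plume
# prediction is used as an input feature for a support vector regressor that is
# trained against observed concentrations.  The plume spreads and the synthetic
# "observations" below are illustrative assumptions, not the paper's data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-level reflection form of the Gaussian plume equation."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(50.0, 2000.0, n)          # downwind distance (m)
y = rng.uniform(-200.0, 200.0, n)         # crosswind offset (m)
u = rng.uniform(1.0, 8.0, n)              # wind speed (m/s)
sigma_y, sigma_z = 0.08 * x, 0.06 * x     # crude power-law spreads (assumed)
c_plume = gaussian_plume(q=100.0, u=u, y=y, z=0.0, h=10.0,
                         sigma_y=sigma_y, sigma_z=sigma_z)

# Synthetic "observations": the plume value distorted by an unknown bias.
c_obs = c_plume * (1.0 + 0.3 * np.sin(x / 300.0)) + rng.normal(0.0, 1e-4, n)

# The regressor learns the residual structure on top of the physical model.
features = np.column_stack([c_plume, u, x, y])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1e-4))
model.fit(features, c_obs)
print("training R^2:", model.score(features, c_obs))
```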

  3. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    Science.gov (United States)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, to evaluate various extraction approaches, and to design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.

  4. Comparison of results from dispersion models for regulatory purposes based on Gaussian-and Lagrangian-algorithms: an evaluating literature study

    International Nuclear Information System (INIS)

    Walter, H.

    2004-01-01

    Powerful tools to describe atmospheric transport processes for radiation protection can be provided by meteorology; these are atmospheric flow and dispersion models. Concerning dispersion models, Gaussian plume models have been used for a long time to describe atmospheric dispersion processes. Advantages of the Gaussian plume models are short computation time, good validation and broad acceptance worldwide. However, some limitations and their implications on the interpretation of model results have to be taken into account, as the mathematical derivation of an analytic solution of the equations of motion leads to severe constraints. In order to minimise these constraints, various dispersion models for scientific and regulatory purposes have been developed and applied. Among these, the Lagrangian particle models are of special interest, because these models are able to simulate atmospheric transport processes close to reality, e.g. the influence of orography, topography, wind shear and other meteorological phenomena. Within this study, the characteristics and computational results of Gaussian dispersion models as well as of Lagrangian models have been compared and evaluated on the basis of numerous papers and reports published in the literature. Special emphasis has been placed on the requirement that dispersion models should comply with EU requirements (Richtlinie 96/29/Euratom, 1996) on a more realistic assessment of the radiation exposure to the population. (orig.)
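
    For contrast with the Gaussian plume approach, the following sketch shows the core update of a minimal Lagrangian (random-walk) particle dispersion model of the kind evaluated in the study. The homogeneous-turbulence parameters and the reflection treatment are illustrative assumptions.

```python
# Minimal sketch of a Lagrangian (random-walk) particle dispersion step, the class
# of model the study compares against Gaussian plume models.  The homogeneous-
# turbulence parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 10000
dt = 1.0                                   # time step (s)
u_mean = np.array([3.0, 0.0, 0.0])         # mean wind (m/s)
sigma_turb = np.array([0.5, 0.5, 0.3])     # turbulent velocity std dev (m/s)
tau = 100.0                                # Lagrangian time scale (s)

pos = np.zeros((n_particles, 3))
vel = rng.normal(0.0, sigma_turb, size=(n_particles, 3))

for _ in range(600):                       # 10 minutes of transport
    # Langevin update for the turbulent velocity fluctuation.
    a = np.exp(-dt / tau)
    vel = a * vel + np.sqrt(1.0 - a**2) * rng.normal(0.0, sigma_turb, (n_particles, 3))
    pos += (u_mean + vel) * dt
    pos[:, 2] = np.abs(pos[:, 2])          # perfect reflection at the ground

# Concentration estimate: count particles in a receptor box 1.5 km downwind.
box = (np.abs(pos[:, 0] - 1500.0) < 50.0) & (np.abs(pos[:, 1]) < 50.0) & (pos[:, 2] < 20.0)
print("particles in receptor volume:", box.sum())
```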

  5. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

  6. Mixed Platoon Flow Dispersion Model Based on Speed-Truncated Gaussian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Weitiao Wu

    2013-01-01

    A mixed traffic flow feature is present on urban arterials in China due to the large number of buses. Based on field data, a macroscopic mixed platoon flow dispersion model (MPFDM) was proposed to simulate the platoon dispersion process along the road section between two adjacent intersections from the flow perspective. To match field observations more closely, a truncated Gaussian mixture distribution was adopted as the speed density distribution for the mixed platoon. The expectation-maximization (EM) algorithm was used for parameter estimation. The relationship between the arriving flow distribution at the downstream intersection and the departing flow distribution at the upstream intersection was investigated using the proposed model. A comparison analysis using virtual flow data was performed between the Robertson model and the MPFDM. The results confirmed the validity of the proposed model.
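
    A minimal sketch of the speed-truncated Gaussian mixture idea follows: two truncated normal components (cars and buses, with assumed parameters) form the platoon speed density, and sampling from it gives the spread of travel times over a road section. The parameter values are not taken from the paper.

```python
# Sketch of a speed-truncated Gaussian mixture for a mixed car/bus platoon and the
# resulting travel-time spread over a road section.  All speed parameters and the
# car/bus share are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.stats import truncnorm

def truncated_normal(mean, std, lower, upper):
    a, b = (lower - mean) / std, (upper - mean) / std
    return truncnorm(a, b, loc=mean, scale=std)

cars = truncated_normal(mean=12.0, std=2.5, lower=5.0, upper=20.0)   # m/s
buses = truncated_normal(mean=8.0, std=1.5, lower=4.0, upper=14.0)   # m/s
w_car, w_bus = 0.8, 0.2                 # mixture weights (share of vehicles)

speeds = np.linspace(4.0, 20.0, 400)
density = w_car * cars.pdf(speeds) + w_bus * buses.pdf(speeds)       # platoon speed density

# Travel times over a 600 m section between two intersections.
rng = np.random.default_rng(2)
n = 5000
is_car = rng.random(n) < w_car
v = np.where(is_car, cars.rvs(n, random_state=rng), buses.rvs(n, random_state=rng))
travel_time = 600.0 / v
print("mean / std of travel time (s): %.1f / %.1f" % (travel_time.mean(), travel_time.std()))
```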

  7. A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements

    International Nuclear Information System (INIS)

    Yuan, Y B; Piao, W Y; Xu, J B

    2007-01-01

    The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements.

  8. A fast Gaussian filtering algorithm for three-dimensional surface roughness measurements

    Science.gov (United States)

    Yuan, Y. B.; Piao, W. Y.; Xu, J. B.

    2007-07-01

    The two-dimensional (2-D) Gaussian filter can be separated into two one-dimensional (1-D) Gaussian filters. The 1-D Gaussian filter can be implemented approximately by the cascaded Butterworth filters. The approximation accuracy will be improved with the increase of the number of the cascaded filters. A recursive algorithm for Gaussian filtering requires a relatively small number of simple mathematical operations such as addition, subtraction, multiplication, or division, so that it has considerable computational efficiency and it is very useful for three-dimensional (3-D) surface roughness measurements. The zero-phase-filtering technique is used in this algorithm, so there is no phase distortion in the Gaussian filtered mean surface. High-order approximation Gaussian filters are proposed for practical use to assure high accuracy of Gaussian filtering of 3-D surface roughness measurements.
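
    The following sketch illustrates the filtering strategy described above: a separable, zero-phase approximation to 2-D Gaussian filtering built from cascaded low-order Butterworth sections applied with forward-backward filtering. The mapping from cutoff wavelength to the Butterworth cutoff and the surface data are assumptions, not the authors' exact recursive coefficients.

```python
# Sketch of a separable, zero-phase approximation to 2-D Gaussian filtering using a
# cascade of low-order Butterworth sections, in the spirit of the recursive approach
# described above.  The mapping from cutoff wavelength to the Butterworth cutoff
# frequency below is an illustrative assumption.
import numpy as np
from scipy.signal import butter, filtfilt

def gaussian_like_1d(profile, cutoff_wavelength, spacing, order=2, cascades=3):
    """Zero-phase low-pass along one axis; filtfilt removes phase distortion."""
    nyquist = 0.5 / spacing
    wn = (1.0 / cutoff_wavelength) / nyquist      # normalised cutoff (assumed mapping)
    b, a = butter(order, wn)
    out = profile
    for _ in range(cascades):                     # cascading sharpens the response
        out = filtfilt(b, a, out)
    return out

def gaussian_like_2d(surface, cutoff_wavelength, spacing):
    """Apply the 1-D filter along rows, then along columns (separable 2-D filter)."""
    rows = np.apply_along_axis(gaussian_like_1d, 1, surface, cutoff_wavelength, spacing)
    return np.apply_along_axis(gaussian_like_1d, 0, rows, cutoff_wavelength, spacing)

# Example: extract the mean surface of a synthetic rough surface.
rng = np.random.default_rng(3)
x = np.arange(0, 512) * 1e-6                       # 1 micrometre spacing
waviness = 0.5e-6 * np.sin(2 * np.pi * x / 200e-6)
surface = waviness[None, :] + waviness[:, None] + rng.normal(0, 0.05e-6, (512, 512))
mean_surface = gaussian_like_2d(surface, cutoff_wavelength=80e-6, spacing=1e-6)
roughness = surface - mean_surface
print("RMS roughness (m):", roughness.std())
```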

  9. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  10. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
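
    A hedged sketch of the kernel-based initialisation described above: the modes of a kernel density estimate suggest the initial number of Gaussian components and their means, EM then refines the mixture, and the Akaike information criterion compares neighbouring orders. The synthetic data and bandwidth choice are assumptions.

```python
# Sketch of kernel-based initialisation for Gaussian mixture EM: the modes of a kernel
# density estimate give the initial number of components and their initial means, and
# the AIC is then used to compare candidate orders.  The synthetic data are an
# illustrative assumption.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0.0, 1.0, 400),
                       rng.normal(5.0, 0.7, 300),
                       rng.normal(9.0, 1.2, 300)])

# 1. Kernel density estimate and its local maxima.
grid = np.linspace(data.min(), data.max(), 1000)
kde = gaussian_kde(data)
density = kde(grid)
peak_idx = argrelextrema(density, np.greater)[0]
init_means = grid[peak_idx].reshape(-1, 1)
k0 = len(init_means)
print("components suggested by the kernel method:", k0)

# 2. EM with kernel-based initial means, compared by AIC against neighbouring orders.
best = None
for k in range(max(1, k0 - 1), k0 + 2):
    means_init = init_means[:k] if k <= k0 else None
    gmm = GaussianMixture(n_components=k, means_init=means_init,
                          random_state=0).fit(data.reshape(-1, 1))
    aic = gmm.aic(data.reshape(-1, 1))
    if best is None or aic < best[0]:
        best = (aic, k, gmm)
print("selected order by AIC:", best[1])
```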

  11. Effects of dispersion and longitudinal chromatic aberration on the focusing of isodiffracting pulsed Gaussian light beam

    International Nuclear Information System (INIS)

    Deng Dongmei; Guo Hong; Han Dingan; Liu Mingwei; Li Changfu

    2005-01-01

    Taking into account the dispersion and the longitudinal chromatic aberration (LCA) of the lens material, the focusing of an isodiffracting pulsed Gaussian light beam through a single lens is analyzed. The smaller the cycle number of the isodiffracting pulsed Gaussian light beam is, the higher the order of material dispersion that should be considered.

  12. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2017-05-01

    This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of the Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, which has to be known in advance. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested via simulated and measured spectra from an energy X-ray spectrometer, and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from the background information, and also effectively distinguish overlapping peaks in the EDXRF spectrum.

  13. Blind signal processing algorithms under DC biased Gaussian noise

    Science.gov (United States)

    Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Distortions caused by a DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation process of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems in the environment of channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. In simulations under channel models with fading, where DC bias noise is abruptly added to the background Gaussian noise, the proposed algorithm converges rapidly even after the onset of the DC bias, showing that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.
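
    For orientation, the sketch below implements the classical constant modulus algorithm (CMA), the baseline on which the biased-constant-modulus criterion above builds; it is not the paper's Dirac-delta/PDF-matching criterion. The channel, DC bias and step size are assumptions.

```python
# Sketch of the classical constant modulus algorithm (CMA) for blind equalisation.
# The channel, DC bias and step size are illustrative assumptions; this is not the
# paper's probability-density-matching criterion itself.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
symbols = rng.choice([-1.0, 1.0], size=n)              # BPSK source
channel = np.array([1.0, 0.4, -0.2])                   # assumed dispersive channel
received = np.convolve(symbols, channel, mode="full")[:n]
received += 0.05 * rng.standard_normal(n) + 0.1        # Gaussian noise + DC bias

taps = 11
w = np.zeros(taps); w[taps // 2] = 1.0                 # centre-spike initialisation
mu = 1e-3
R2 = np.mean(symbols**4) / np.mean(symbols**2)         # constant modulus (= 1 for BPSK)
dc_estimate = received.mean()                          # crude DC removal

buf = np.zeros(taps)
for k in range(n):
    buf = np.roll(buf, 1); buf[0] = received[k] - dc_estimate
    y = w @ buf
    e = y * (y**2 - R2)                                # constant modulus error
    w -= mu * e * buf                                  # stochastic gradient step

print("equaliser taps:", np.round(w, 3))
```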

  14. Stochastic cluster algorithms for discrete Gaussian (SOS) models

    International Nuclear Information System (INIS)

    Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S.

    1990-10-01

    We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.)

  15. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Xu

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals through a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with the classical PID and GA.

  16. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    Science.gov (United States)

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian come from the statistical information of the best individuals through a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with the classical PID and GA.
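
    A minimal sketch of a Gaussian estimation-of-distribution loop with elitism, in the spirit of FEGEDA, applied to the sphere benchmark. The selection ratio, learning rate and elitism rule are illustrative assumptions rather than the exact FEGEDA update.

```python
# Minimal sketch of a Gaussian estimation-of-distribution algorithm with elitism on
# the sphere benchmark.  The selection ratio, learning rate and elitism rule are
# illustrative assumptions.
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(6)
dim, pop_size, n_best = 10, 100, 30
mean = rng.uniform(-5.0, 5.0, dim)
cov = 4.0 * np.eye(dim)
elite = None
alpha = 0.7                                   # fast learning rate (assumed)

for gen in range(100):
    pop = rng.multivariate_normal(mean, cov, size=pop_size)
    if elite is not None:
        pop[0] = elite                        # elitism: keep the best-so-far solution
    fitness = sphere(pop)
    order = np.argsort(fitness)
    elite = pop[order[0]].copy()
    best = pop[order[:n_best]]                # statistics of the best individuals
    mean = (1 - alpha) * mean + alpha * best.mean(axis=0)
    cov = (1 - alpha) * cov + alpha * np.cov(best, rowvar=False)

print("best objective after 100 generations:", sphere(elite[None, :])[0])
```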

  17. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimation, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since the importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to account for the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and of weighting each of the samples. The proposed algorithm is not necessarily effective in cases where the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation can resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
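
    The importance-sampling step of the hybrid scheme can be sketched in a scalar toy problem: a large sample is drawn from a Gaussian proposal (standing in for the ETKF analysis), weighted by a non-Gaussian observation likelihood, and then reduced back to a small ensemble. The state, proposal and likelihood below are assumptions.

```python
# Sketch of the importance-sampling step in the hybrid scheme: draw a large sample
# from a Gaussian proposal (standing in for the ETKF analysis) and weight it by a
# non-Gaussian observation likelihood.  The toy state, proposal and likelihood are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_samples = 50000

# Gaussian proposal from an (assumed) ensemble analysis: mean 1.0, std 0.8.
proposal_mean, proposal_std = 1.0, 0.8
samples = rng.normal(proposal_mean, proposal_std, n_samples)

# Non-Gaussian observation model: Laplace-distributed observation error.
obs, obs_scale = 1.6, 0.3
def log_likelihood(x):
    return -np.abs(obs - x) / obs_scale

def log_prior(x):
    # Prior forecast density (assumed Gaussian) that the proposal approximates.
    return -0.5 * (x - 0.8)**2 / 1.0**2

log_proposal = -0.5 * (samples - proposal_mean)**2 / proposal_std**2
log_w = log_likelihood(samples) + log_prior(samples) - log_proposal
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = np.sum(w * samples)
ess = 1.0 / np.sum(w**2)                    # effective sample size to monitor degeneracy
print("posterior mean %.3f, effective sample size %.0f" % (posterior_mean, ess))

# Reduce back to a small ensemble for the next forecast step.
ensemble = rng.choice(samples, size=50, p=w)
```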

  18. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    International Nuclear Information System (INIS)

    Zhang, Junhao; Hu, Junfeng; Chen, Tongfei

    2015-01-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit weight of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. (paper)

  19. Dispersion under low wind speed conditions using Gaussian Plume approach

    International Nuclear Information System (INIS)

    Rakesh, P.T.; Srinivas, C.V.; Baskaran, R.; Venkatesan, R.; Venkatraman, B.

    2018-01-01

    For radioactive dose computation due to atmospheric releases, dispersion models are an essential requirement. For this purpose, the Gaussian plume model (GPM) is used in the short range and advanced particle dispersion models are used at all ranges. In dispersion models, apart from wind speed, the most influential parameter determining the fate of the pollutant is the turbulent diffusivity. In the GPM the diffusivity is represented using an empirical approach. Studies show that under low wind speed conditions, the existing diffusivity relationships are not adequate for estimating the diffusion. An important phenomenon that occurs at low wind speed is meandering motion. It is found that under meandering motions the extent of plume dispersion is greater than the value estimated using the conventional GPM and particle transport models. In this work a set of new turbulence parameters for the horizontal diffusion of the plume is suggested; using them in the GPM, the plume is simulated and compared against observations available from the Hanford tracer release experiment.

  20. State of the art atmospheric dispersion modelling. Should the Gaussian plume model still be used?

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)

    2016-11-15

    For regulatory purposes with respect to licensing and supervision of airborne releases from nuclear installations, the Gaussian plume model is still in use in Germany. However, for complex situations the Gaussian plume model is to be replaced by a Lagrangian particle model. Now the new EU basic safety standards for protection against the dangers arising from exposure to ionising radiation (EU BSS) [1] ask for a realistic assessment of doses to members of the public from authorised practices. This call for a realistic assessment raises the question of whether dispersion modelling with the Gaussian plume model is still an adequate approach or whether the use of more complex models is mandatory.

  1. User's manual for DWNWND: an interactive Gaussian plume atmospheric transport model with eight dispersion parameter options

    International Nuclear Information System (INIS)

    Fields, D.E.; Miller, C.W.

    1980-05-01

    The most commonly used approach for estimating the atmospheric concentration and deposition of material downwind from its point of release is the Gaussian plume atmospheric dispersion model. Two of the critical parameters in this model are σ_y and σ_z, the horizontal and vertical dispersion parameters, respectively. A number of different sets of values for σ_y and σ_z have been determined empirically for different release heights and meteorological and terrain conditions. The computer code DWNWND, described in this report, is an interactive implementation of the Gaussian plume model. This code allows the user to specify any one of eight different sets of the empirically determined dispersion parameters. Using the selected dispersion parameters, ground-level normalized exposure estimates are made at any specified downwind distance. Computed values may be corrected for plume depletion due to deposition and for plume settling due to gravitational fall. With this interactive code, the user chooses values for ten parameters which define the source, the dispersion and deposition process, and the sampling point. DWNWND is written in FORTRAN for execution on a PDP-10 computer, requiring less than one second of central processor unit time for each simulation.
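
    The sketch below shows the plume calculation with a selectable dispersion-parameter option, in the spirit of DWNWND. The power-law coefficients are placeholders for illustration; they are not any of the eight empirical option sets implemented in the actual code.

```python
# Sketch of the Gaussian plume calculation with a selectable dispersion-parameter
# option.  The power-law coefficients below are placeholder values for illustration
# only; they are not the empirical sets offered by the actual DWNWND code.
import numpy as np

# Hypothetical sigma_y / sigma_z power laws, sigma = a * x**b with x in metres.
SIGMA_OPTIONS = {
    "option_A": {"ay": 0.22, "by": 0.90, "az": 0.20, "bz": 0.85},
    "option_B": {"ay": 0.08, "by": 0.92, "az": 0.06, "bz": 0.80},
}

def normalized_exposure(x, y, z, release_height, wind_speed, option="option_A"):
    """Ground-level-reflected chi/Q (s/m^3) at a downwind receptor."""
    p = SIGMA_OPTIONS[option]
    sigma_y = p["ay"] * x**p["by"]
    sigma_z = p["az"] * x**p["bz"]
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - release_height)**2 / (2 * sigma_z**2))
                + np.exp(-(z + release_height)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * np.pi * wind_speed * sigma_y * sigma_z)

# chi/Q at 500 m downwind on the plume centreline for a 30 m stack.
print(normalized_exposure(x=500.0, y=0.0, z=0.0, release_height=30.0, wind_speed=4.0))
```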

  2. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulations of the mixture weights are derived by the Lagrange multiplier and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.
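
    For comparison, the sketch below implements the typical threshold-based pruning and merging of Gaussian mixture components that the iterative algorithm is evaluated against. The truncation threshold, merge distance and component cap are assumptions.

```python
# Sketch of threshold-based pruning and merging of Gaussian mixture components, the
# baseline the iterative algorithm above is compared against.  The truncation
# threshold, merge distance and maximum component count are illustrative assumptions.
import numpy as np

def prune_and_merge(weights, means, covs, trunc_thresh=1e-5, merge_thresh=4.0, max_comp=100):
    weights, means, covs = np.asarray(weights), np.asarray(means), np.asarray(covs)
    keep = weights > trunc_thresh                     # prune negligible components
    weights, means, covs = weights[keep], means[keep], covs[keep]

    out_w, out_m, out_c = [], [], []
    remaining = list(np.argsort(weights)[::-1])       # process heaviest first
    while remaining:
        i = remaining[0]
        inv = np.linalg.inv(covs[i])
        # Components within the Mahalanobis merge distance of component i.
        d2 = [(means[j] - means[i]) @ inv @ (means[j] - means[i]) for j in remaining]
        group = [remaining[k] for k, v in enumerate(d2) if v <= merge_thresh]
        w = weights[group].sum()
        m = np.sum(weights[group][:, None] * means[group], axis=0) / w
        spread = sum(weights[j] * (covs[j] + np.outer(means[j] - m, means[j] - m))
                     for j in group) / w
        out_w.append(w); out_m.append(m); out_c.append(spread)
        remaining = [j for j in remaining if j not in group]

    order = np.argsort(out_w)[::-1][:max_comp]        # cap the number of components
    return ([out_w[k] for k in order], [out_m[k] for k in order], [out_c[k] for k in order])

# Two nearly identical components describing one PHD peak are merged into one.
w, m, c = prune_and_merge([0.6, 0.5, 1e-7],
                          [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([50.0, 50.0])],
                          [np.eye(2)] * 3)
print(len(w), "component(s) remain; merged weight =", round(w[0], 2))
```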

  3. EDPUFF- a Gaussian dispersion code for consequence analysis

    International Nuclear Information System (INIS)

    Oza, R.B.; Bapat, V.N.; Nair, R.N.; Hukkoo, R.K.; Krishnamoorthy, T.M.

    1995-01-01

    EDPUFF (Equi-Distance Puff) is a Gaussian dispersion code written in FORTRAN to model atmospheric dispersion of instantaneous or continuous point source releases. It is designed to incorporate the effect of changing meteorological conditions and source release rates on the spatial distribution profiles and their consequences. Effects of variation of parameters like puff spacing, puff packing and averaging schemes are discussed, and the choice of the best values for minimum errors and minimum computer CPU time is identified. The code calculates the doses to individual receptors as well as average doses for population zones from internal and external routes over the area of interest. Internal dose computations are made for inhalation and ingestion pathways, while the doses from the external route consist of cloud doses and doses from surface-deposited activity. It computes inhalation and ingestion doses (milk route only) for the critical group (1 yr old child). In the case of population zones it finds the maximum possible doses in a given area along with the average doses discussed above. The report gives the doses from various pathways for a unit release of fixed duration. (author). 7 refs., figs., 7 appendixes

  4. An unconventional adaptation of a classical Gaussian plume dispersion scheme for the fast assessment of external irradiation from a radioactive cloud

    Science.gov (United States)

    Pecha, Petr; Pechova, Emilie

    2014-06-01

    This article focuses on the derivation of an effective algorithm for the fast estimation of cloudshine doses/dose rates induced by a large mixture of radionuclides discharged into the atmosphere. A special modification of the classical Gaussian plume approach is proposed for approximation of the near-field dispersion problem. Specifically, the accidental radioactivity release is subdivided into consecutive one-hour Gaussian segments, each driven by a short-term meteorological forecast for the respective hour. Determination of the photon fluence rate from ambient cloud irradiation is coupled to a special decomposition of the Gaussian plume shape into equivalent virtual elliptic disks. This facilitates solution of the formerly used time-consuming 3-D integration and provides advantages with regard to acceleration of the computational process on a local scale. An optimal choice of integration limit is adopted on the basis of the mean free path of γ-photons in air. An efficient approach is introduced for treatment of the wide energy spectrum of the emitted photons, where the usual multi-nuclide approach is replaced by a new multi-group scheme. The algorithm is capable of generating the radiological responses over a large net of spatial nodes, which makes the proposed procedure a suitable tool for online data assimilation analysis in near-field areas. A specific technique for numerical integration is verified by comparison with a partial analytical solution. Convergence of the finite cloud approximation to the tabulated semi-infinite cloud values for dose conversion factors was validated.

  5. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
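
    The quantity minimised by such a controller can be sketched directly: a Parzen-window (Gaussian kernel) estimate of the error density yields the quadratic Renyi entropy via the information potential. The kernel width below is an assumption.

```python
# Sketch of the Parzen-window (Gaussian kernel) estimate of the tracking-error density
# and the corresponding quadratic Renyi entropy, the quantity an error-entropy
# minimisation controller drives down.  The kernel width is an illustrative assumption.
import numpy as np

def information_potential(errors, sigma=0.5):
    """V(e) = (1/N^2) sum_ij G(e_i - e_j; 2*sigma^2); entropy H2 = -log V."""
    e = np.asarray(errors)
    diff = e[:, None] - e[None, :]
    kernel = np.exp(-diff**2 / (4.0 * sigma**2)) / np.sqrt(4.0 * np.pi * sigma**2)
    return kernel.mean()

def quadratic_renyi_entropy(errors, sigma=0.5):
    return -np.log(information_potential(errors, sigma))

rng = np.random.default_rng(8)
wide_errors = rng.standard_normal(500)           # poorly tracked: spread-out errors
tight_errors = 0.2 * rng.standard_normal(500)    # well tracked: concentrated errors
print("entropy, wide errors :", quadratic_renyi_entropy(wide_errors))
print("entropy, tight errors:", quadratic_renyi_entropy(tight_errors))
```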

  6. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Bilateral filtering has been applied widely in the area of digital image processing, but in the high-gradient regions of an image, bilateral filtering may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering. Based on this analysis, a mixed image de-noising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First of all, it uses a Gaussian filter to filter the noisy image and obtain a reference image, then takes both the reference image and the noisy image as the input to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides the image's high-frequency information. Comparative experiments between the method in this paper and traditional bilateral filtering showed that the mixed de-noising algorithm can effectively overcome the staircase effect; the filtered image is smoother, its textural features are closer to the original image, and it achieves a higher PSNR value, while the amount of calculation of the two algorithms is basically the same.

  7. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    Science.gov (United States)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean

  8. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining a O(T) memory footprint, compared to O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).

  9. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    Science.gov (United States)

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.

  10. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GP) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
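
    The Toeplitz idea can be sketched as follows: with regular sampling and a stationary kernel the GP covariance matrix is Toeplitz, so scipy.linalg.solve_toeplitz gives the predictive weights in O(t^2) instead of a dense O(t^3) solve. The periodic kernel and its hyperparameters are assumptions.

```python
# Sketch of Gaussian process prediction for a regularly sampled periodic series using
# the Toeplitz structure of the covariance matrix (scipy.linalg.solve_toeplitz runs in
# O(t^2) rather than the O(t^3) of a dense solve).  The kernel and its hyperparameters
# are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz

def kernel(lag, period=12.0, length=2.0, noise=0.05):
    """Stationary periodic kernel evaluated at an integer lag."""
    k = np.exp(-2.0 * np.sin(np.pi * lag / period)**2 / length**2)
    return k + noise * (lag == 0)

rng = np.random.default_rng(9)
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12.0) + 0.05 * rng.standard_normal(t.size)

# Predict the next observation from the full history.
first_col = kernel(t - t[0])             # first column of the Toeplitz covariance K
k_star = kernel(t.size - t)              # covariances between history and the new point
alpha = solve_toeplitz(first_col, series)        # solves K alpha = y in O(t^2)
prediction = k_star @ alpha
residual = np.sin(2 * np.pi * t.size / 12.0) - prediction
print("one-step prediction %.3f, residual %.3f" % (prediction, residual))
```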

  11. MCEM algorithm for the log-Gaussian Cox process

    OpenAIRE

    Delmas, Celine; Dubois-Peyrard, Nathalie; Sabbadin, Regis

    2014-01-01

    Log-Gaussian Cox processes are an important class of models for aggregated point patterns. They have been largely used in spatial epidemiology (Diggle et al., 2005), in agronomy (Bourgeois et al., 2012), in forestry (Moller et al.), in ecology (sightings of wild animals) or in environmental sciences (radioactivity counts). A log-Gaussian Cox process is a Poisson process with a stochastic intensity depending on a Gaussian random field. We consider the case where this Gaussian random field is ...
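
    A minimal sketch of simulating a log-Gaussian Cox process on a 1-D grid, following the definition above: draw a Gaussian random field, exponentiate it to obtain the stochastic intensity, and draw Poisson counts cell by cell. The exponential covariance and its parameters are assumptions.

```python
# Sketch of simulating a log-Gaussian Cox process on a 1-D grid: draw a Gaussian
# random field, exponentiate it to get the stochastic intensity, then draw Poisson
# counts cell by cell.  The exponential covariance and its parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
n_cells, cell_width = 200, 0.5
x = np.arange(n_cells) * cell_width

# Gaussian random field with an exponential covariance function.
mean, variance, corr_length = 0.0, 1.0, 5.0
cov = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)
field = rng.multivariate_normal(mean * np.ones(n_cells), cov)

# Stochastic intensity and Poisson counts per cell.
intensity = np.exp(field)
counts = rng.poisson(intensity * cell_width)
print("total points:", counts.sum(), " max cell intensity:", intensity.max().round(2))
```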

  12. Computer Simulation for Dispersion of Air Pollution Released from a Line Source According to Gaussian Model

    International Nuclear Information System (INIS)

    Emad, A.A.; El Shazly, S.M.; Kassem, Kh.O.

    2010-01-01

    A line source model, developed in the laboratory of environmental physics, Faculty of Science at Qena, Egypt, is proposed to describe the downwind dispersion of pollutants near roadways in different cities in Egypt. The model is based on the Gaussian plume methodology and is used to predict air pollutant concentrations near roadways. In this direction, simple software developed by the authors is presented in this paper, adopting a complete Graphical User Interface (GUI) technique for operation on various Windows-based microcomputers. The software interface and code have been designed with Microsoft Visual Basic 6.0 based on the Gaussian diffusion equation. This software is developed to predict concentrations of gaseous pollutants (e.g. CO, SO2, NO2 and particulates) at a user-specified receptor grid.

  13. Generalized algorithm for control of numerical dispersion in explicit time-domain electromagnetic simulations

    Directory of Open Access Journals (Sweden)

    Benjamin M. Cowan

    2013-04-01

    We describe a modification to the finite-difference time-domain algorithm for electromagnetics on a Cartesian grid which eliminates numerical dispersion error in vacuum for waves propagating along a grid axis. We provide details of the algorithm, which generalizes previous work by allowing 3D operation with a wide choice of aspect ratio, and give conditions to eliminate dispersive errors along one or more of the coordinate axes. We discuss the algorithm in the context of laser-plasma acceleration simulation, showing significant reduction (up to a factor of 280 at a plasma density of 10^{23} m^{-3}) of the dispersion error of a linear laser pulse in a plasma channel. We then compare the new algorithm with the standard electromagnetic update for laser-plasma accelerator stage simulations, demonstrating that by controlling numerical dispersion, the new algorithm allows more accurate simulation than is otherwise obtained. We also show that the algorithm can be used to overcome the critical but difficult challenge of consistent initialization of a relativistic particle beam and its fields in an accelerator simulation.

  14. Approximation problems with the divergence criterion for Gaussian variablesand Gaussian processes

    NARCIS (Netherlands)

    A.A. Stoorvogel; J.H. van Schuppen (Jan)

    1996-01-01

    System identification for stationary Gaussian processes includes an approximation problem. Currently the subspace algorithm for this problem enjoys much attention. This algorithm is based on a transformation of a finite time series to canonical variable form followed by a truncation.

  15. Gaussian capacity of the quantum bosonic memory channel with additive correlated Gaussian noise

    International Nuclear Information System (INIS)

    Schaefer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.

    2011-01-01

    We present an algorithm for calculation of the Gaussian classical capacity of a quantum bosonic memory channel with additive Gaussian noise. The algorithm, restricted to Gaussian input states, is applicable to all channels with noise correlations obeying certain conditions and works in the full input energy domain, beyond previous treatments of this problem. As an illustration, we study the optimal input states and capacity of a quantum memory channel with Gauss-Markov noise [J. Schaefer, Phys. Rev. A 80, 062313 (2009)]. We evaluate the enhancement of the transmission rate when using these optimal entangled input states by comparison with a product coherent-state encoding and find out that such a simple coherent-state encoding achieves not less than 90% of the capacity.

  16. Hemodynamic segmentation of brain perfusion images with delay and dispersion effects using an expectation-maximization algorithm.

    Directory of Open Access Journals (Sweden)

    Chia-Feng Lu

    Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods is based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the expectation-maximization segmentation method by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise in the signal profiles for tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, tissue at risk of infarct and tissue with or without complementary blood supply from the communicating arteries can be identified.

  17. A Gaussian process and derivative spectral-based algorithm for red blood cell segmentation

    Science.gov (United States)

    Xue, Yingying; Wang, Jianbiao; Zhou, Mei; Hou, Xiyue; Li, Qingli; Liu, Hongying; Wang, Yiting

    2017-07-01

    As an imaging technology used in remote sensing, hyperspectral imaging can provide more information than traditional optical imaging of blood cells. In this paper, an AOTF-based microscopic hyperspectral imaging system is used to capture hyperspectral images of blood cells. In order to achieve the segmentation of red blood cells, a Gaussian process using a squared exponential kernel function is first applied after data preprocessing to make a preliminary segmentation. The derivative spectrum with a spectral angle mapping algorithm is then applied to the original image to segment the cell boundaries, and these boundaries are used to cut the cells obtained from the Gaussian process so as to separate adjacent cells. Then morphological processing, including closing, erosion and dilation, is applied to keep adjacent cells apart, and by applying median filtering to remove noise points and filling holes inside the cells, the final segmentation result is obtained. The experimental results show that this method achieves a better segmentation effect on human red blood cells.
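
    The spectral angle mapping step can be sketched on a synthetic cube: each pixel spectrum is compared to a reference spectrum by the angle between them, and pixels under an angle threshold are retained. The cube, reference spectrum and threshold are assumptions, not data from the paper.

```python
# Sketch of the spectral angle mapping (SAM) step: every pixel spectrum is compared to
# a reference spectrum by the angle between them, and pixels under an angle threshold
# are kept as cell candidates.  The synthetic cube, reference spectrum and threshold
# are illustrative assumptions.
import numpy as np

def spectral_angle(cube, reference):
    """Angle (radians) between each pixel spectrum in an (H, W, B) cube and a reference."""
    dot = np.tensordot(cube, reference, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos)

rng = np.random.default_rng(11)
bands = 60
reference = np.exp(-0.5 * ((np.arange(bands) - 25) / 8.0)**2)   # assumed cell spectrum
cube = 0.1 * rng.random((64, 64, bands))
cube[20:40, 20:40] += reference                                  # a "cell" region

angles = spectral_angle(cube, reference)
mask = angles < 0.15                       # threshold in radians (assumed)
print("pixels classified as cell:", int(mask.sum()))
```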

  18. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable. How to recognize ships with such fuzzy features is an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
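
    A hedged sketch of the two-step scheme: compute Hu moments for each binary ship image, train one Gaussian mixture model per class, and assign a test image to the class with the highest log-likelihood. The synthetic rectangle/ellipse "ships" and all parameters are assumptions standing in for real imagery.

```python
# Sketch of the two-step recognition scheme: compute Hu moments for each (binary)
# ship image, train one Gaussian mixture model per class, then assign a test image
# to the class with the highest log-likelihood.  The synthetic rectangle/ellipse
# "ships" stand in for real imagery and are purely illustrative.
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def hu_features(binary_img):
    hu = cv2.HuMoments(cv2.moments(binary_img)).flatten()
    # Log-scale the moments so they are comparable in magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def make_ship(kind, rng):
    img = np.zeros((64, 64), dtype=np.uint8)
    w, h = int(rng.integers(30, 50)), int(rng.integers(6, 12))
    if kind == "tanker":
        cv2.rectangle(img, (10, 28), (10 + w, 28 + h), 255, -1)
    else:                                    # "frigate": elongated ellipse
        cv2.ellipse(img, (32, 32), (w // 2, h // 2), 0, 0, 360, 255, -1)
    return img

rng = np.random.default_rng(12)
classes = ["tanker", "frigate"]
models = {}
for cls in classes:
    feats = np.array([hu_features(make_ship(cls, rng)) for _ in range(200)])
    models[cls] = GaussianMixture(n_components=2, random_state=0).fit(feats)

test = hu_features(make_ship("frigate", rng)).reshape(1, -1)
scores = {cls: m.score(test) for cls, m in models.items()}
print("recognised as:", max(scores, key=scores.get))
```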

  19. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.

  20. Integrating a street-canyon model with a regional Gaussian dispersion model for improved characterisation of near-road air pollution

    Science.gov (United States)

    Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne

    2017-03-01

    The development and use of dispersion models that simulate traffic-related air pollution in urban areas has risen significantly in support of air pollution exposure research. In order to accurately estimate population exposure, it is important to generate concentration surfaces that take into account near-road concentrations as well as the transport of pollutants throughout an urban region. In this paper, an integrated modelling chain was developed to simulate ambient Nitrogen Dioxide (NO2) in a dense urban neighbourhood while taking into account traffic emissions, the regional background, and the transport of pollutants within the urban canopy. For this purpose, we developed a hybrid configuration including 1) a street canyon model, which simulates pollutant transfer along streets and intersections, taking into account the geometry of buildings and other obstacles, and 2) a Gaussian puff model, which resolves the transport of contaminants at the top of the urban canopy and accounts for regional meteorology. Each dispersion model was validated against measured concentrations and compared against the hybrid configuration. Our results demonstrate that the hybrid approach significantly improves on the output of each model used on its own. A clear underestimation appears for both the Gaussian model and the street-canyon model compared to observed data, because the Gaussian model ignores the building effect and the canyon model underestimates the contribution of other roads. The hybrid approach reduced the RMSE (of observed vs. predicted concentrations) by 16%-25% compared to each model on its own, and increased FAC2 (fraction of predictions within a factor of two of the observations) by 10%-34%.

  1. Assimilating concentration observations for transport and dispersion modeling in a meandering wind field

    Science.gov (United States)

    Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.

    Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
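
    The variational idea can be sketched by recovering the wind direction with a population-based global optimiser that minimises the misfit between observed concentrations and Gaussian plume predictions; scipy's differential evolution stands in here for the paper's genetic algorithm. The plume parameters, receptor layout and synthetic observations are assumptions.

```python
# Sketch of the variational idea: recover the wind direction by minimising the misfit
# between observed concentrations and Gaussian plume predictions with a population-based
# global optimiser (scipy's differential evolution stands in for the paper's genetic
# algorithm).  The plume parameters, receptor layout and "observations" are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

def plume(receptors, wind_dir_deg, q=1.0, u=4.0, h=10.0):
    """Ground-level Gaussian plume concentrations at (x, y) receptors."""
    theta = np.deg2rad(wind_dir_deg)
    # Rotate receptors into the plume frame (x' downwind, y' crosswind).
    xp = receptors[:, 0] * np.cos(theta) + receptors[:, 1] * np.sin(theta)
    yp = -receptors[:, 0] * np.sin(theta) + receptors[:, 1] * np.cos(theta)
    xp = np.maximum(xp, 1.0)                       # only downwind receptors see the plume
    sig_y, sig_z = 0.1 * xp, 0.06 * xp             # assumed power-law spreads
    return (q / (np.pi * u * sig_y * sig_z)
            * np.exp(-yp**2 / (2 * sig_y**2)) * np.exp(-h**2 / (2 * sig_z**2)))

rng = np.random.default_rng(13)
receptors = rng.uniform(-1000.0, 1000.0, size=(40, 2))
true_direction = 37.0
observed = plume(receptors, true_direction) * (1 + 0.05 * rng.standard_normal(40))

def misfit(params):
    return np.sum((plume(receptors, params[0]) - observed)**2)

result = differential_evolution(misfit, bounds=[(0.0, 360.0)], seed=0, tol=1e-10)
print("estimated wind direction: %.1f deg (true %.1f)" % (result.x[0], true_direction))
```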

  2. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models.

    Science.gov (United States)

    Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L

    2016-02-07

    A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial

  3. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models

    International Nuclear Information System (INIS)

    Schellenberg, Graham; Goertzen, Andrew L; Stortz, Greg

    2016-01-01

    A typical positron emission tomography detector comprises a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x–y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array read out by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5–82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial

  4. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models

    Science.gov (United States)

    Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.

    2016-02-01

    A typical positron emission tomography detector comprises a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array read out by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial

  5. Fixed-Point Algorithms for the Blind Separation of Arbitrary Complex-Valued Non-Gaussian Signal Mixtures

    Directory of Open Access Journals (Sweden)

    Douglas Scott C

    2007-01-01

    We derive new fixed-point algorithms for the blind separation of complex-valued mixtures of independent, noncircularly symmetric, and non-Gaussian source signals. Leveraging recently developed results on the separability of complex-valued signal mixtures, we systematically construct iterative procedures on a kurtosis-based contrast whose evolutionary characteristics are identical to those of the FastICA algorithm of Hyvarinen and Oja in the real-valued mixture case. Thus, our methods inherit the fast convergence properties, computational simplicity, and ease of use of the FastICA algorithm while at the same time extending this class of techniques to complex signal mixtures. For extracting multiple sources, symmetric and asymmetric signal deflation procedures can be employed. Simulations for both noiseless and noisy mixtures indicate that the proposed algorithms have superior finite-sample performance in data-starved scenarios as compared to existing complex ICA methods while performing about as well as the best of these techniques for larger data-record lengths.

  6. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples and the non-linearity of the problem. It is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian kernel SVMs: instead of following the common practice of picking the apparently best parameters, a genetic algorithm is used to search for an optimal parameter pair. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
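
    A minimal sketch of the parameter-selection problem: cross-validated accuracy of a Gaussian-kernel SVM as a function of (C, gamma) on a synthetic small-sample, high-dimensional data set. Here an exhaustive grid stands in for the genetic-algorithm search proposed in the record, and the RFE gene-selection step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# toy stand-in for a microarray data set: few samples, many features
X, y = make_classification(n_samples=60, n_features=500, n_informative=10, random_state=0)

# cross-validated search over the Gaussian-kernel parameters (C, gamma);
# the record replaces this exhaustive grid with a genetic-algorithm search
param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-5, 0, 6)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      cv=StratifiedKFold(n_splits=5), n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```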

  7. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.

    2013-01-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  8. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali

    2013-03-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  9. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    Science.gov (United States)

    Chen, Guangye; Chacon, Luis; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).

  10. Making tensor factorizations robust to non-gaussian noise.

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Eric C. (Rice University, Houston, TX); Kolda, Tamara Gibson

    2011-03-01

    Tensors are multi-way arrays, and the CANDECOMP/PARAFAC (CP) tensor factorization has found application in many different domains. The CP model is typically fit using a least squares objective function, which is a maximum likelihood estimate under the assumption of independent and identically distributed (i.i.d.) Gaussian noise. We demonstrate that this loss function can be highly sensitive to non-Gaussian noise. Therefore, we propose a loss function based on the 1-norm because it can accommodate both Gaussian and grossly non-Gaussian perturbations. We also present an alternating majorization-minimization (MM) algorithm for fitting a CP model using our proposed loss function (CPAL1) and compare its performance to the workhorse algorithm for fitting CP models, CP alternating least squares (CPALS).

  11. A Low Delay and Fast Converging Improved Proportionate Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Benesty Jacob

    2007-01-01

    A sparse system identification algorithm for network echo cancellation is presented. This new approach exploits both the fast convergence of the improved proportionate normalized least mean square (IPNLMS) algorithm and the efficient implementation of the multidelay adaptive filtering (MDF) algorithm, inheriting the beneficial properties of both. The proposed IPMDF algorithm is evaluated using impulse responses with various degrees of sparseness. Simulation results are also presented for both speech and white Gaussian noise input sequences. It has been shown that the IPMDF algorithm outperforms the MDF and IPNLMS algorithms for both sparse and dispersive echo path impulse responses. The computational complexity of the proposed algorithm is also discussed.
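
    The proportionate idea behind IPNLMS/IPMDF can be sketched in a few lines: each filter tap receives a step size proportional to its current magnitude, which accelerates convergence on sparse echo paths. The sketch below is a plain IPNLMS update (not the multidelay, frequency-domain IPMDF of the record), and the constants are illustrative.

```python
import numpy as np

def ipnlms(x, d, L=64, mu=0.3, alpha=0.0, delta=1e-2, eps=1e-8):
    """Sketch of an IPNLMS update: proportionate step sizes favour large taps,
    which speeds convergence on sparse echo paths (alpha = -1 reduces to NLMS)."""
    w = np.zeros(L)
    e = np.zeros(len(x))
    xbuf = np.zeros(L)
    for n in range(len(x)):
        xbuf = np.r_[x[n], xbuf[:-1]]            # most recent L input samples
        e[n] = d[n] - w @ xbuf                   # a-priori error
        k = (1 - alpha) / (2 * L) + (1 + alpha) * np.abs(w) / (2 * np.abs(w).sum() + eps)
        w += mu * e[n] * k * xbuf / (xbuf @ (k * xbuf) + delta)
    return w, e

# identify a sparse "echo path" from a white Gaussian noise input
rng = np.random.default_rng(0)
h = np.zeros(64); h[[5, 20, 33]] = [0.8, -0.5, 0.3]
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w, e = ipnlms(x, d)
print(np.abs(w - h).max())   # residual misadjustment
```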

  12. Evolution of super-Gaussian pulses in a nonlinear optical fiber

    Science.gov (United States)

    Bugay, Aleksandr N.; Khalyapin, Vyacheslav A.

    2018-04-01

    An analytic and numerical study is carried out of the dynamics of the parameters of a super-Gaussian pulse whose spectrum can lie in either the normal or the anomalous group-velocity dispersion region. An analytical solution is found for the parameter characterizing the evolution of the degree of the super-Gaussian pulse. The loss of profile rectangularity is shown to occur much faster than the dispersive broadening of the pulse, and the corresponding characteristic length is given by an explicit formula.

  13. A simplified algorithm for measuring erythrocyte deformability dispersion by laser ektacytometry

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, S Yu; Yurchuk, Yu S [Department of Physics, M.V. Lomonosov Moscow State University (Russian Federation)

    2015-08-31

    The possibility of measuring the dispersion of red blood cell deformability by laser diffractometry in shear flow (ektacytometry) is analysed theoretically. A diffraction pattern parameter is found that is sensitive to the dispersion of erythrocyte deformability and, to a lesser extent, to such parameters as the scattered light intensity level, the shape of the red blood cells, the concentration of red blood cells in the suspension, the geometric dimensions of the experimental setup, etc. A new algorithm is proposed for measuring erythrocyte deformability dispersion from laser ektacytometry data. (laser applications in medicine)

  14. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes
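
    At its core, Gaussian decomposition means fitting a sum of Gaussian components to a spectrum. The sketch below does only that final fitting step with scipy; in AGD the number of components and the initial guesses would come from derivative spectroscopy and machine learning rather than being set by hand as they are here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of Gaussian components; p = (amplitude, centre, width) repeated."""
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

x = np.linspace(-50, 50, 600)
truth = (1.0, -12.0, 4.0, 0.6, 8.0, 9.0)          # two blended components
rng = np.random.default_rng(2)
spec = gaussians(x, *truth) + 0.03 * rng.standard_normal(x.size)

# AGD would supply the number of components and these initial guesses automatically
# (via derivative spectroscopy and machine learning); here they are set by hand
p0 = (0.8, -10.0, 5.0, 0.5, 10.0, 8.0)
popt, _ = curve_fit(gaussians, x, spec, p0=p0)
print(popt.round(2))   # recovered amplitudes, centres and widths
```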

  15. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  16. CVFEM for Multiphase Flow with Disperse and Interface Tracking, and Algorithms Performances

    Directory of Open Access Journals (Sweden)

    M. Milanez

    2015-12-01

    A Control-Volume Finite-Element Method (CVFEM) is newly formulated within Eulerian and spatial averaging frameworks for effective simulation of disperse transport, deposit distribution and interface tracking. Their algorithms are implemented alongside an existing continuous phase algorithm. Flow terms are newly implemented for a control volume (CV) fixed in space, and the CVs' equations are assembled based on a finite element method (FEM). Upon impacting stationary and moving boundaries, the disperse phase changes its phase and the solver triggers identification of CVs with excess deposit and their neighboring CVs for its accommodation in front of an interface. The solver then updates boundary conditions on the moving interface as well as domain conditions on the accumulating deposit. Corroboration of the algorithms' performance is conducted on illustrative simulations against novel and existing Eulerian and Lagrangian solutions, namely (i) other, i.e. external, methods with analytical and physical experimental formulations, and (ii) characteristics internal to the CVFEM.

  17. Optimal multigrid algorithms for the massive Gaussian model and path integrals

    International Nuclear Information System (INIS)

    Brandt, A.; Galun, M.

    1996-01-01

    Multigrid algorithms are presented which, in addition to eliminating the critical slowing down, can also eliminate the "volume factor". The elimination of the volume factor removes the need to produce many independent fine-grid configurations for averaging out their statistical deviations, by averaging over the many samples produced on coarse grids during the multigrid cycle. Thermodynamic limits of observables can be calculated to relative accuracy ε_r in just O(ε_r^-2) computer operations, where ε_r is the error relative to the standard deviation of the observable. In this paper, we describe in detail the calculation of the susceptibility in the one-dimensional massive Gaussian model, which is also a simple example of path integrals. Numerical experiments show that the susceptibility can be calculated to relative accuracy ε_r in about 8 ε_r^-2 random number generations, independent of the mass size.

  18. Evaluation of regional and local atmospheric dispersion models for the analysis of traffic-related air pollution in urban areas

    Science.gov (United States)

    Fallah-Shorshani, Masoud; Shekarrizfard, Maryam; Hatzopoulou, Marianne

    2017-10-01

    Dispersion of road transport emissions in urban metropolitan areas is typically simulated using Gaussian models that ignore the turbulence and drag induced by buildings, which are especially relevant for areas with dense downtown cores. To consider the effect of buildings, street canyon models are used but often at the level of single urban corridors and small road networks. In this paper, we compare and validate two dispersion models with widely varying algorithms, across a modelling domain consisting of the City of Montreal, Canada, accounting for emissions from more than 40,000 roads. The first dispersion model is based on flow decomposition into the urban canopy sub-flow as well as the overlying airflow. It takes into account the specific height and geometry of buildings along each road. The second model is a Gaussian puff dispersion model, which handles complex terrain and incorporates three-dimensional meteorology, but accounts for buildings only through variations in the initial vertical mixing coefficient. Validation against surface observations indicated that both models under-predicted measured concentrations. Average weekly exposure surfaces derived from both models were found to be reasonably correlated (r = 0.8), although the Gaussian dispersion model tended to underestimate concentrations around the roadways compared to the street canyon model. In addition, both models were used to estimate exposures of a representative sample of the Montreal population composed of 1319 individuals. Large differences were noted, whereby exposures derived from the Gaussian puff model were significantly lower than exposures derived from the street canyon model, an expected result considering the concentration of population around roadways. These differences have large implications for analyses of health effects associated with NO2 exposure.

  19. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm.

    Science.gov (United States)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and the LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters by taking into consideration suitable initial guess values. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation. Copyright © 2017 Elsevier B.V. All rights reserved.
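
    The estimation task can be sketched with the classical (integer-order) advection-dispersion equation and a generic population-based optimizer: simulate a breakthrough curve, then recover the velocity and dispersion coefficient by minimizing the misfit. SciPy's differential evolution is used here purely as a stand-in for the Bees Algorithm, and the fractional-order sFADE model is not reproduced.

```python
import numpy as np
from scipy.optimize import differential_evolution

def ade(x, t, v, D, M=1.0):
    """1-D advection-dispersion solution for an instantaneous point source."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - v * t) ** 2 / (4 * D * t))

# synthetic concentration observations at several locations and one time
x = np.linspace(5, 60, 25)
t = 30.0
rng = np.random.default_rng(3)
obs = ade(x, t, v=1.2, D=0.8) * (1 + 0.05 * rng.standard_normal(x.size))

def misfit(p):
    v, D = p
    return np.sum((ade(x, t, v, D) - obs) ** 2)

# population-based global search; the record uses the Bees Algorithm and the
# fractional-order equation (sFADE), neither of which is reproduced here
res = differential_evolution(misfit, bounds=[(0.1, 5.0), (0.01, 5.0)], seed=0)
print(res.x)  # recovered (v, D)
```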

  20. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm

    Science.gov (United States)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and the LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can more accurately estimate the sFADE parameters by taking into consideration suitable initial guess values. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.

  1. PARTRACK - A particle tracking algorithm for transport and dispersion of solutes in a sparsely fractured rock

    International Nuclear Information System (INIS)

    Svensson, Urban

    2001-04-01

    A particle tracking algorithm, PARTRACK, that simulates transport and dispersion in a sparsely fractured rock is described. The main novel feature of the algorithm is the introduction of multiple particle states. It is demonstrated that the introduction of this feature allows for the simultaneous simulation of Taylor dispersion, sorption and matrix diffusion. A number of test cases are used to verify and demonstrate the features of PARTRACK. It is shown that PARTRACK can simulate the following processes, believed to be important for the problem addressed: the split-up of a tracer cloud at a fracture intersection, channeling in a fracture plane, Taylor dispersion, matrix diffusion and sorption. From the results of the test cases, it is concluded that PARTRACK is an adequate framework for simulation of the transport and dispersion of a solute in a sparsely fractured rock.
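
    The multiple-particle-state idea can be illustrated with a toy random-walk tracker in which particles switch between a mobile state (advection plus dispersion) and an immobilized state standing in for sorption or matrix diffusion. The transition probabilities and transport parameters below are invented for illustration and have nothing to do with PARTRACK's calibrated models.

```python
import numpy as np

rng = np.random.default_rng(4)
n, steps, dt = 5000, 400, 1.0
v, D = 0.5, 0.05                  # advection velocity, dispersion coefficient
p_trap, p_release = 0.02, 0.01    # per-step transition probabilities (sorption / matrix)

x = np.zeros(n)
mobile = np.ones(n, dtype=bool)   # particle "state": mobile or immobilized

for _ in range(steps):
    # advect and disperse only the mobile particles
    x[mobile] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(mobile.sum())
    # random transitions between states mimic sorption / matrix diffusion;
    # trapped particles lag behind, which stretches the breakthrough tail
    trap = mobile & (rng.random(n) < p_trap)
    release = ~mobile & (rng.random(n) < p_release)
    mobile = (mobile & ~trap) | release

print(f"mean displacement {x.mean():.1f}, spread {x.std():.1f}")
```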

  2. Application of Gaussian cubature to model two-dimensional population balances

    Directory of Open Access Journals (Sweden)

    Bałdyga Jerzy

    2017-09-01

    In many systems of engineering interest the moment transformation of the population balance is applied. One of the methods to solve the transformed population balance equations is the quadrature method of moments. It is based on approximating the density function in the source term by a Gaussian quadrature so that it preserves the moments of the original distribution. In this work we propose another method to be applied to the multivariate population problem in chemical engineering, namely a Gaussian cubature (GC) technique that applies linear programming for the approximation of the multivariate distribution. Examples of the application of the Gaussian cubature are presented for four processes typical of chemical engineering applications. The first and second are devoted to crystallization modeling with direction-dependent two-dimensional and three-dimensional growth rates, the third represents drop dispersion accompanied by mass transfer in liquid-liquid dispersions, and the fourth case concerns the aggregation and sintering of particle populations.

  3. Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes

    Science.gov (United States)

    Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.

    2013-01-01

    Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments from up to a few hundred microns down to a fraction of a wavelength in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation. It is an elegant method of coarse phasing segmented mirrors. DFS performance accuracy is dependent upon careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction line dithering procedure and combining this dithering process with an error function while minimizing the phase term of the fitted signal defines, in essence, the ADFS algorithm.

  4. The Dispersal and Persistence of Invasive Marine Species

    Science.gov (United States)

    Glick, E. R.; Pringle, J.

    2007-12-01

    The spread of invasive marine species is a continuing problem throughout the world, though not entirely understood. Why do some species invade more easily than others? How are the range limits of these species set? Recent research (Byers & Pringle 2006, Pringle & Wares 2007) has produced retention criteria that determine whether a coastal species with a benthic adult stage and planktonic larvae can be retained within its range and invade in the direction opposite that of the mean current experienced by the larvae (i.e. upstream). These results, however, are only accurate for Gaussian dispersal kernels. For kernels whose kurtosis differs from a Gaussian's, the retention criterion becomes increasingly inaccurate as the mean current increases. Using recent results of Lutscher (2006), we find an improved retention criterion which is much more accurate for non-Gaussian dispersal kernels. The importance of considering non-Gaussian kernels is illustrated for a number of commonly used dispersal kernels, and the relevance of these calculations is illustrated by considering the northward limit of invasion of Hemigrapsus sanguineus, an important invader in the Gulf of Maine.

  5. Algorithms for Rapidly Dispersing Robot Swarms in Unknown Environments

    OpenAIRE

    Hsiang, Tien-Ruey; Arkin, Esther M.; Bender, Michael; Fekete, Sandor P.; Mitchell, Joseph S. B.

    2002-01-01

    We develop and analyze algorithms for dispersing a swarm of primitive robots in an unknown environment, R. The primary objective is to minimize the makespan, that is, the time to fill the entire region. An environment is composed of pixels that form a connected subset of the integer grid. There is at most one robot per pixel and robots move horizontally or vertically at unit speed. Robots enter R by means of k>=1 door pixels. Robots are primitive finite automata, only having local communicatio...

  6. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm has recently been developed to improve the performance and robustness of previous DFS algorithms with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process through the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally the fiducial calibration using the Range-Gate-Metrology technique is carried out and a <10 nm or <1% algorithm accuracy is demonstrated.

  7. Data Assimilation in Air Contaminant Dispersion Using a Particle Filter and Expectation-Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Rongxiao Wang

    2017-09-01

    The accurate prediction of air contaminant dispersion is essential to air quality monitoring and the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter and the combination of a particle filter with the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of the state parameters is relatively low. In contrast, when the dimension of the state parameters becomes higher, the method combining the particle filter with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
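
    A minimal bootstrap (sampling-importance-resampling) particle filter conveys the first of the two assimilation methods: particles carry candidate source parameters, weights are updated from the likelihood of each new observation, and resampling counters degeneracy. The linear three-sensor forward model below is a stand-in for the atmospheric dispersion model, and the EM extension of the record is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(q):
    """Stand-in dispersion model: predicted concentrations at 3 sensors for release rate q."""
    return q * np.array([0.8, 0.5, 0.2])

true_q, sigma = 4.0, 0.1
n_part, n_steps = 2000, 25

particles = rng.uniform(0.0, 10.0, n_part)        # prior over the release rate
weights = np.full(n_part, 1.0 / n_part)

for _ in range(n_steps):
    obs = forward(true_q) + sigma * rng.standard_normal(3)       # new sensor reading
    particles += 0.05 * rng.standard_normal(n_part)              # small random walk (process noise)
    resid = obs[None, :] - np.array([forward(q) for q in particles])
    weights *= np.exp(-0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2)
    weights /= weights.sum()
    # resample when the effective sample size collapses (degeneracy)
    if 1.0 / np.sum(weights ** 2) < n_part / 2:
        idx = rng.choice(n_part, n_part, p=weights)
        particles, weights = particles[idx], np.full(n_part, 1.0 / n_part)

print(f"posterior mean release rate: {np.sum(weights * particles):.2f}")
```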

  8. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  9. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  10. Reconstruction of electrons with the Gaussian-sum filter in the CMS tracker at the LHC

    International Nuclear Information System (INIS)

    Adam, W; Fruehwirth, R; Strandlie, A; Todorov, T

    2005-01-01

    The bremsstrahlung energy loss distribution of electrons propagating in matter is highly non-Gaussian. Because the Kalman filter relies solely on Gaussian probability density functions, it is not necessarily the optimal reconstruction algorithm for electron tracks. A Gaussian-sum filter (GSF) algorithm for electron reconstruction in the CMS tracker has therefore been developed and implemented. The basic idea is to model the bremsstrahlung energy loss distribution by a Gaussian mixture rather than by a single Gaussian. It is shown that the GSF is able to improve the momentum resolution of electrons compared to the standard Kalman filter. The momentum resolution and the quality of the error estimate are studied both with a fast simulation, modelling the radiative energy loss in a simplified detector, and the full CMS tracker simulation. (research note from collaboration)

  11. Reconstruction of Electrons with the Gaussian-Sum Filter in the CMS Tracker at the LHC

    CERN Document Server

    Adam, Wolfgang; Strandlie, Are; Todor, T

    2005-01-01

    The bremsstrahlung energy loss distribution of electrons propagating in matter is highly non-Gaussian. Because the Kalman filter relies solely on Gaussian probability density functions, it is not necessarily the optimal reconstruction algorithm for electron tracks. A Gaussian-sum filter (GSF) algorithm for electron reconstruction in the CMS tracker has therefore been developed and implemented. The basic idea is to model the bremsstrahlung energy loss distribution by a Gaussian mixture rather than by a single Gaussian. It is shown that the GSF is able to improve the momentum resolution of electrons compared to the standard Kalman filter. The momentum resolution and the quality of the error estimate are studied both with a fast simulation, modelling the radiative energy loss in a simplified detector, and the full CMS tracker simulation.

  12. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
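
    For orientation, the sketch below shows plain (batch) Gaussian process regression with scikit-learn on data a controller might collect online; the predictive mean and standard deviation are exactly the quantities a dual controller trades off. The recursive, fixed-budget GP update that the record proposes is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# noisy samples of an unknown nonlinearity, as a controller would collect online
X = rng.uniform(-3, 3, (30, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)

# standard (batch) GP regression; a recursive variant keeps the cost bounded
# by not refitting on the full, ever-growing data set at every time step
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
print(mean, std)   # prediction with uncertainty, both needed by a dual controller
```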

  13. First and second derivatives of two electron integrals over Cartesian Gaussians using Rys polynomials

    International Nuclear Information System (INIS)

    Schlegel, H.B.; Binkley, J.S.; Pople, J.A.

    1984-01-01

    Formulas are developed for the first and second derivatives of two electron integrals over Cartesian Gaussians. Integrals and integral derivatives are evaluated by the Rys polynomial method. Higher angular momentum functions are not used to calculate the integral derivatives; instead the integral formulas are differentiated directly to produce compact and efficient expressions for the integral derivatives. The use of this algorithm in the ab initio molecular orbital programs Gaussian 80 and Gaussian 82 is discussed. Representative timings for some small molecules with several basis sets are presented. This method is compared with previously published algorithms and its computational merits are discussed.

  14. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    Science.gov (United States)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

    With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding distribution of data and availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method of nonsparse low rank matrices and forms an important subroutine in our Gaussian process regression algorithm.

  15. Fractional-calculus-based FDTD algorithm for ultrawideband electromagnetic characterization of arbitrary dispersive dielectric materials

    NARCIS (Netherlands)

    Caratelli, Diego; Mescia, Luciano; Bia, Pietro; Stukach, Oleg V.

    2016-01-01

    A novel finite-difference time-domain algorithm for modeling ultrawideband electromagnetic pulse propagation in arbitrary multirelaxed dispersive media is presented. The proposed scheme is based on a general, yet computationally efficient, series representation of the fractional derivative operators

  16. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Background: MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as decision trees, whose constructed models are interpretable, have attracted less attention. Results: This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to that delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion: Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G

  17. Scaled unscented transform Gaussian sum filter: Theory and application

    KAUST Repository

    Luo, Xiaodong

    2010-05-01

    In this work we consider the state estimation problem in nonlinear/non-Gaussian systems. We introduce a framework, called the scaled unscented transform Gaussian sum filter (SUT-GSF), which combines two ideas: the scaled unscented Kalman filter (SUKF) based on the concept of scaled unscented transform (SUT) (Julier and Uhlmann (2004) [16]), and the Gaussian mixture model (GMM). The SUT is used to approximate the mean and covariance of a Gaussian random variable which is transformed by a nonlinear function, while the GMM is adopted to approximate the probability density function (pdf) of a random variable through a set of Gaussian distributions. With these two tools, a framework can be set up to assimilate nonlinear systems in a recursive way. Within this framework, one can treat a nonlinear stochastic system as a mixture model of a set of sub-systems, each of which takes the form of a nonlinear system driven by a known Gaussian random process. Then, for each sub-system, one applies the SUKF to estimate the mean and covariance of the underlying Gaussian random variable transformed by the nonlinear governing equations of the sub-system. Incorporating the estimations of the sub-systems into the GMM gives an explicit (approximate) form of the pdf, which can be regarded as a "complete" solution to the state estimation problem, as all of the statistical information of interest can be obtained from the explicit form of the pdf (Arulampalam et al. (2002) [7]). In applications, a potential problem of a Gaussian sum filter is that the number of Gaussian distributions may increase very rapidly. To this end, we also propose an auxiliary algorithm to conduct pdf re-approximation so that the number of Gaussian distributions can be reduced. With the auxiliary algorithm, in principle the SUT-GSF can achieve almost the same computational speed as the SUKF if the SUT-GSF is implemented in parallel. As an example, we will use the SUT-GSF to assimilate a 40-dimensional system due to
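
    The building block of the SUT-GSF is the (scaled) unscented transform itself: propagate a small, deterministically chosen set of sigma points through the nonlinearity and recover a mean and covariance from weighted sums. A minimal sketch follows; the parameter defaults are illustrative, and the Gaussian-sum and re-approximation machinery of the record are not included.

```python
import numpy as np

def scaled_unscented_transform(mu, P, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance P through a nonlinear function f
    using 2n+1 deterministically chosen sigma points (scaled unscented transform)."""
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)              # matrix square root
    sigma = np.vstack([mu, mu + S.T, mu - S.T])        # shape (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])                # transformed sigma points
    mean = wm @ Y
    cov = (wc[:, None] * (Y - mean)).T @ (Y - mean)
    return mean, cov

# example: a 2-D Gaussian pushed through a mildly nonlinear map
mu, P = np.array([1.0, 0.5]), np.diag([0.2, 0.1])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(scaled_unscented_transform(mu, P, f))
```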

  18. Gaussian elimination in split unitary groups with an application to public-key cryptography

    Directory of Open Access Journals (Sweden)

    Ayan Mahalanobis

    2017-07-01

    Gaussian elimination is used in special linear groups to solve the word problem. In this paper, we extend Gaussian elimination to split unitary groups. These algorithms have an application in building a public-key cryptosystem, as we demonstrate.

  19. Information geometry of Gaussian channels

    International Nuclear Information System (INIS)

    Monras, Alex; Illuminati, Fabrizio

    2010-01-01

    We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).

  20. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  1. Application of decision tree algorithm for identification of rock forming minerals using energy dispersive spectrometry

    Science.gov (United States)

    Akkaş, Efe; Çubukçu, H. Evren; Artuner, Harun

    2014-05-01

    C5.0 Decision Tree algorithm. The predictions of the decision tree classifier, namely the matching of the test data with the appropriate mineral group, yield an overall accuracy of >90%. Besides, the algorithm successfully discriminated some mineral groups despite their similar elemental composition, such as orthopyroxene ((Mg,Fe)2[Si2O6]) and olivine ((Mg,Fe)2[SiO4]). Furthermore, the effects of various operating conditions were insignificant for the classifier. These results demonstrate that the decision tree algorithm stands as an accurate, rapid and automated method for mineral classification/identification. Hence, the decision tree algorithm would be a promising component of an expert system focused on real-time, automated mineral identification using energy dispersive spectrometers without being affected by the operating conditions. Keywords: mineral identification, energy dispersive spectrometry, decision tree algorithm.
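
    A stripped-down version of the classification step: train a decision tree on element fractions and predict the mineral group. The nominal compositions below are rough illustrative values, scikit-learn's CART tree stands in for the C5.0 algorithm used in the record, and no real energy dispersive spectra are involved.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# toy stand-in for EDS data: normalised element fractions (Si, Mg, Fe, Ca, Al, O)
# scattered around rough nominal compositions of a few mineral groups (values illustrative only)
nominal = {
    "olivine":       [0.14, 0.20, 0.08, 0.00, 0.00, 0.58],
    "orthopyroxene": [0.20, 0.14, 0.08, 0.00, 0.00, 0.58],
    "plagioclase":   [0.22, 0.00, 0.00, 0.06, 0.12, 0.60],
}
X, y = [], []
for label, comp in nominal.items():
    X.append(np.abs(np.array(comp) + 0.01 * rng.standard_normal((300, 6))))
    y += [label] * 300
X = np.vstack(X)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.3f}")
```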

  2. Gaussian variable neighborhood search for the file transfer scheduling problem

    Directory of Open Access Journals (Sweden)

    Dražić Zorica

    2016-01-01

    This paper presents new modifications of the Variable Neighborhood Search approach for solving the file transfer scheduling problem. To obtain better solutions in a small neighborhood of a current solution, we implement two new local search procedures. As Gaussian Variable Neighborhood Search showed promising results when solving continuous optimization problems, its implementation for the discrete file transfer scheduling problem is also presented. In order to apply this continuous optimization method to the discrete problem, a mapping of the uncountable set of feasible solutions into a finite set is performed. Both local search modifications gave better results for the large-size instances, as well as better average performance for medium and large-size instances. One local search modification achieved a significant acceleration of the algorithm. The numerical experiments showed that the results obtained by the Gaussian modifications are comparable with the results obtained by standard VNS-based algorithms developed for combinatorial optimization. In some cases the Gaussian modifications gave even better results. [Project of the Ministry of Science of the Republic of Serbia, no. 174010]
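
    The Gaussian ingredient of the method is the shaking step: instead of drawing a neighbor uniformly from the k-th neighborhood, the current solution is perturbed with Gaussian noise whose spread grows with k. The continuous toy problem below illustrates that loop only; the mapping onto the discrete file transfer scheduling problem is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def f(x):                                # continuous test objective (Rastrigin-like)
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def local_search(x, step=0.05, iters=200):
    """Simple hill climber used as the local search inside VNS."""
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        if f(cand) < f(x):
            x = cand
    return x

x = rng.uniform(-5, 5, 5)
sigmas = [0.1, 0.5, 1.0, 2.0]            # "neighborhoods" = growing Gaussian spreads
for _ in range(50):
    k = 0
    while k < len(sigmas):
        shaken = x + sigmas[k] * rng.standard_normal(x.size)   # Gaussian shaking
        cand = local_search(shaken)
        if f(cand) < f(x):
            x, k = cand, 0               # improvement: restart from the first neighborhood
        else:
            k += 1                       # otherwise widen the neighborhood
print(x.round(3), f(x).round(3))
```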

  3. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient and high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920x1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general purpose CPU with the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
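
    The algorithm implemented in hardware is OpenCV's Gaussian mixture background subtractor (MOG2). The snippet below shows the software reference whose per-pixel update the paper optimizes; the video file name and parameter values are illustrative.

```python
import cv2

# OpenCV's Gaussian-mixture background model (MOG2), the software reference
# whose per-pixel update equations the record maps onto FPGA/ASIC hardware
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("video_1080p.mp4")   # illustrative file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)       # per-pixel GMM update + foreground decision
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```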

  4. Self-assembled structures of Gaussian nematic particles.

    Science.gov (United States)

    Nikoubashman, Arash; Likos, Christos N

    2010-03-17

    We investigate the stable crystalline configurations of a nematic liquid crystal made of soft parallel ellipsoidal particles interacting via a repulsive, anisotropic Gaussian potential. For this purpose, we use genetic algorithms (GA) in order to predict all relevant and possible solid phase candidates into which this fluid can freeze. Subsequently we present and discuss the emerging novel structures and the resulting zero-temperature phase diagram of this system. The latter features a variety of crystalline arrangements, in which the elongated Gaussian particles in general do not align with any one of the high-symmetry crystallographic directions, a compromise arising from the interplay and competition between anisotropic repulsions and crystal ordering. Only at very strong degrees of elongation does a tendency of the Gaussian nematics to align with the longest axis of the elementary unit cell emerge.

  5. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    Science.gov (United States)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. The present paper reports the results of a study comparing the filtering quality achieved for these two image types by a special algorithm under Gaussian and non-Gaussian noise. Examples of filter operation at different values of signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been identified. It is shown that, given the same a priori information about the signal, the quality of the developed algorithm for RGB signal filtering is much better than that of an adaptive filter. The algorithm also has an advantage over the median filter when fluctuation and impulse noise are filtered simultaneously.

  6. Accident consequence assessments with different atmospheric dispersion models

    International Nuclear Information System (INIS)

    Panitz, H.J.

    1989-11-01

    An essential aim of the improvements of the new program system UFOMOD for Accident Consequence Assessments (ACAs) was to substitute the straight-line Gaussian plume model conventionally used in ACA models by more realistic atmospheric dispersion models. To identify improved models which can be applied in ACA codes and to quantify the implications of different dispersion models on the results of an ACA, probabilistic comparative calculations with different atmospheric dispersion models have been performed. The study showed that there are trajectory models available which can be applied in ACAs and that they provide more realistic results of ACAs than straight-line Gaussian models. This led to a completely novel concept of atmospheric dispersion modelling in which two different distance ranges of validity are distinguished: the near range of some ten kilometres distance and the adjacent far range which are assigned to respective trajectory models. (orig.)

  7. A study of the atmospheric dispersion of a high release of krypton-85 above a complex coastal terrain, comparison with the predictions of Gaussian models (Briggs, Doury, ADMS4).

    Science.gov (United States)

    Leroy, C; Maro, D; Hébert, D; Solier, L; Rozet, M; Le Cavelier, S; Connan, O

    2010-11-01

    Atmospheric releases of krypton-85, from the nuclear fuel reprocessing plant at the AREVA NC facility at La Hague (France), were used to test Gaussian models of dispersion. In 2001-2002, the French Institute for Radiological Protection and Nuclear Safety (IRSN) studied the atmospheric dispersion of 15 releases, using krypton-85 as a tracer for plumes emitted from two 100-m-high stacks. Krypton-85 is a chemically inert radionuclide. Krypton-85 air concentration measurements were performed on the ground in the downwind direction, at distances between 0.36 and 3.3 km from the release, under neutral or slightly unstable atmospheric conditions. The standard deviation for the horizontal dispersion of the plume and the Atmospheric Transfer Coefficient (ATC) were determined from these measurements. The experimental results were compared with calculations using first generation (Doury, Briggs) and second generation (ADMS 4.0) Gaussian models. The ADMS 4.0 model was used in two configurations; one takes account of the effect of the built-up area, and the other the effect of the roughness of the surface on the plume dispersion. Only the Briggs model correctly reproduced the measured values for the width of the plume, whereas the ADMS 4.0 model overestimated it and the Doury model underestimated it. The agreement of the models with measured values of the ATC varied according to distance from the release point. For distances less than 2 km from the release point, the ADMS 4.0 model achieved the best agreement between model and measurement; beyond this distance, the best agreement was achieved by the Briggs and Doury models.

  8. Performance Analysis of a New Coded TH-CDMA Scheme in Dispersive Infrared Channel with Additive Gaussian Noise

    Science.gov (United States)

    Hamdi, Mazda; Kenari, Masoumeh Nasiri

    2013-06-01

    We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter benefits from a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver, considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. At low transmit power, however, where increasing the coding gain shortens the chip time and consequently leads to more corruption due to the channel dispersion, there exists an optimum value for the coding gain. For the matched filter, on the other hand, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time hopping schemes.

  9. Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems.

    Science.gov (United States)

    Krohling, Renato A; Coelho, Leandro dos Santos

    2006-12-01

    In this correspondence, an approach based on coevolutionary particle swarm optimization to solve constrained optimization problems formulated as min-max problems is presented. In standard or canonical particle swarm optimization (PSO), a uniform probability distribution is used to generate random numbers for the accelerating coefficients of the local and global terms. We propose a Gaussian probability distribution to generate the accelerating coefficients of PSO. Two populations of PSO using Gaussian distribution are used on the optimization algorithm that is tested on a suite of well-known benchmark constrained optimization problems. Results have been compared with the canonical PSO (constriction factor) and with a coevolutionary genetic algorithm. Simulation results show the suitability of the proposed algorithm in terms of effectiveness and robustness.
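
    To make the core idea concrete, the sketch below (a simplified reading of the approach, not the authors' code) replaces the usual uniform random factors in the PSO velocity update with absolute values of Gaussian draws; the inertia weight and all other parameter values are illustrative.

        # PSO velocity update with Gaussian-distributed acceleration coefficients.
        import numpy as np

        def pso_gaussian(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
            rng = np.random.default_rng(0)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            gbest = pbest[pbest_val.argmin()].copy()

            for _ in range(iters):
                # Gaussian (instead of uniform) random acceleration coefficients
                c1 = np.abs(rng.normal(0.0, 1.0, (n_particles, dim)))
                c2 = np.abs(rng.normal(0.0, 1.0, (n_particles, dim)))
                v = 0.7 * v + c1 * (pbest - x) + c2 * (gbest - x)
                x = np.clip(x + v, lo, hi)

                vals = np.apply_along_axis(f, 1, x)
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest

        # Example: minimise the sphere function in 10 dimensions
        print(pso_gaussian(lambda z: float(np.sum(z ** 2)), dim=10))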

  10. Nonclassicality by Local Gaussian Unitary Operations for Gaussian States

    Directory of Open Access Journals (Sweden)

    Yangyang Wang

    2018-04-01

    Full Text Available A measure of nonclassicality N in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. N is a faithful quantum correlation measure for Gaussian states as product states have no such correlation and every non-product Gaussian state contains it. For any bipartite Gaussian state ρ_AB, we always have 0 ≤ N(ρ_AB) < 1, where the upper bound 1 is sharp. An explicit formula of N for (1+1)-mode Gaussian states and an estimate of N for (n+m)-mode Gaussian states are presented. A criterion of entanglement is established in terms of this correlation. The quantum correlation N is also compared with entanglement, Gaussian discord and Gaussian geometric discord.

  11. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Full Text Available Circuits and systems able to process high quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware oriented, formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed having commercial FPGA devices as target and provides speed and logic resources occupation that overcome previously proposed implementations. The circuit, when implemented on Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.

  12. PSSGP : Program for Simulation of Stationary Gaussian Processes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    This report describes the computer program PSSGP. PSSGP can be used to simulate realizations of stationary Gaussian stochastic processes. The simulation algorithm can be coupled with some applications. One possibility is to use PSSGP to estimate the first-passage density function of a given system...
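
    A common way to generate such realizations, assuming the process is specified by a one-sided power spectral density, is the spectral representation method sketched below; this is an illustrative stand-in only, not the PSSGP implementation.

        # Spectral-representation sketch for simulating a stationary Gaussian process.
        import numpy as np

        def simulate_stationary_gaussian(S, w_max, N, t, rng=None):
            """S(w): one-sided power spectral density evaluated on N frequencies."""
            rng = rng or np.random.default_rng()
            dw = w_max / N
            w = (np.arange(N) + 0.5) * dw               # frequency grid
            A = np.sqrt(2.0 * S(w) * dw)                 # component amplitudes
            phi = rng.uniform(0.0, 2.0 * np.pi, N)       # independent random phases
            # Sum of cosines: approximately Gaussian by the central limit theorem
            return np.sum(A[:, None] * np.cos(np.outer(w, t) + phi[:, None]), axis=0)

        t = np.arange(0.0, 20.0, 0.01)
        x = simulate_stationary_gaussian(lambda w: 1.0 / (1.0 + w ** 2), 20.0, 1024, t)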

  13. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model from measurements at specific monitoring locations using linear regression, although this model is intrinsically non-linear. With these estimated parameters and the Gaussian model, the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the estimates for the deposition, deposition data have been generated by the Gaussian model, including a measurement error, by a Monte Carlo simulation, and this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The present technique has also been applied for the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor of 1.5 worse. The present type of analysis makes a cost-benefit approach to the

  14. Assessment of the GPC Control Quality Using Non–Gaussian Statistical Measures

    Directory of Open Access Journals (Sweden)

    Domański Paweł D.

    2017-06-01

    Full Text Available This paper presents an alternative approach to the task of control performance assessment. Various statistical measures based on Gaussian and non-Gaussian distribution functions are evaluated. The analysis starts with the review of control error histograms followed by their statistical analysis using probability distribution functions. Simulation results obtained for a control system with the generalized predictive controller algorithm are considered. The proposed approach using Cauchy and Lévy α-stable distributions shows robustness against disturbances and enables effective control loop quality evaluation. Tests of the predictive algorithm prove its ability to detect the impact of the main controller parameters, such as the model gain, the dynamics or the prediction horizon.

  15. Optimisation of centroiding algorithms for photon event counting imaging

    International Nuclear Information System (INIS)

    Suhling, K.; Airey, R.W.; Morgan, B.L.

    1999-01-01

    Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs
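
    The contrast between the centre-of-gravity and Gaussian centroiding rules discussed above can be sketched for a one-dimensional, three-pixel event profile as follows (intensity values are hypothetical; real intensifier events are two-dimensional).

        # Sub-pixel event position relative to the peak pixel, two centroiding rules.
        import numpy as np

        def centroid_3point_cog(left, centre, right):
            """3-point centre of gravity."""
            return (right - left) / (left + centre + right)

        def centroid_gaussian(left, centre, right):
            """Gaussian centroid (parabola through log intensities); fails by division
            by zero for flat or very small pulses, as noted in the abstract."""
            l, c, r = np.log([left, centre, right])
            return 0.5 * (l - r) / (l - 2.0 * c + r)

        print(centroid_3point_cog(10.0, 40.0, 20.0))
        print(centroid_gaussian(10.0, 40.0, 20.0))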

  16. Outage performance of cognitive radio systems with Improper Gaussian signaling

    KAUST Repository

    Amin, Osama

    2015-06-14

    Improper Gaussian signaling has proved its ability to improve the achievable rate of systems that suffer from interference compared with proper Gaussian signaling. In this paper, we first study the impact of improper Gaussian signaling on the performance of the cognitive radio system by analyzing the outage probability of both the primary user (PU) and the secondary user (SU). We derive an exact expression for the SU outage probability and upper and lower bounds for the PU outage probability. Then, we design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the proposed bounds and adaptive algorithms by numerical results.

  17. CityZoom UP (Urban Pollution): a computational tool for the fast generation and setup of urban scenarios for CFD and dispersion modelling simulation

    OpenAIRE

    Grazziotin, Pablo Colossi

    2016-01-01

    This research presents the development of CityZoom UP, the first attempt to extend existing urban planning software in order to assist in modelling urban scenarios and setting up simulation parameters for Gaussian dispersion and CFD models. Based on the previous capabilities and graphic user interfaces of CityZoom to model and validate urban scenarios based on Master Plan regulations, new graphic user interfaces, automatic mesh generation and data conversion algorithms have been created to se...

  18. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  19. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is commonly used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the fact that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, and the strong tracking filter is used to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. The simulation results show that the proposed MM-GGIW-CPHD tracking algorithm can effectively deal with the combination/spawning of groups, and that the tracking error of group targets in the maneuvering stage is decreased.

  20. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with one parameter that defines both the mean and the variance. Poisson regression assumes that the mean and variance are equal (equidispersion). Nonetheless, in some cases the count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. Under over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling over-dispersed paired count data. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. The parameters of the GWBPIGR model are estimated by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out with the Maximum Likelihood Ratio Test (MLRT) method.

  1. Dispersion relations for 1D high-gain FELs

    International Nuclear Information System (INIS)

    Webb, S.D.; Litvinenko, V.N.

    2010-01-01

    We present analytical results for the one-dimensional dispersion relation for high-gain FELs. Using kappa-n distributions, we obtain analytical relations between the dispersion relations for various order kappa distributions. Since an exact solution exists for the kappa-1 (Lorentzian) distribution, this provides some insight into the number of modes on the way to the Gaussian distribution.

  2. Color Texture Segmentation by Decomposition of Gaussian Mixture Model

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 287-296 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA MŠk 2C06019 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : texture segmentation * gaussian mixture model * EM algorithm Subject RIV: IN - Informatics, Computer Science Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/grim-color texture segmentation by decomposition of gaussian mixture model.pdf

  3. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
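
    The statistical part of such an approach rests on standard Gaussian process regression; a generic one-dimensional sketch of the posterior-mean prediction (squared-exponential kernel, illustrative hyperparameters, not the authors' implementation) is given below.

        # Generic GP regression prediction used as the statistical information source.
        import numpy as np

        def gp_predict(x_train, y_train, x_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
            def k(a, b):
                return sigma_f ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
            K = k(x_train, x_train) + sigma_n ** 2 * np.eye(len(x_train))
            alpha = np.linalg.solve(K, y_train)
            return k(x_test, x_train) @ alpha          # posterior mean at the test inputs

        # Example: predict an intermediate pixel value from its 1-D neighbours
        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.array([10.0, 12.0, 11.0, 9.0])
        print(gp_predict(x, y, np.array([1.5])))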

  4. Interconversion of pure Gaussian states requiring non-Gaussian operations

    Science.gov (United States)

    Jabbour, Michael G.; García-Patrón, Raúl; Cerf, Nicolas J.

    2015-01-01

    We analyze the conditions under which local operations and classical communication enable entanglement transformations between bipartite pure Gaussian states. A set of necessary and sufficient conditions had been found [G. Giedke et al., Quant. Inf. Comput. 3, 211 (2003)] for the interconversion between such states that is restricted to Gaussian local operations and classical communication. Here, we exploit majorization theory in order to derive more general (sufficient) conditions for the interconversion between bipartite pure Gaussian states that go beyond Gaussian local operations. While our technique is applicable to an arbitrary number of modes for each party, it allows us to exhibit surprisingly simple examples of 2×2 Gaussian states that necessarily require non-Gaussian local operations to be transformed into each other.

  5. Gaussian elimination is not optimal, revisited

    DEFF Research Database (Denmark)

    Macedo, Hugo Daniel

    2016-01-01

    We refactor the universal law for the tensor product to express matrix multiplication as the product MN of two matrices M and N, thus making it possible to use such matrix products to encode and transform algorithms performing matrix multiplication using techniques from linear algebra. We explore such possibility and show two stepwise refinements transforming the composition MN into the Naïve and Strassen's matrix multiplication algorithms. Although the end results are equations involving matrix products, our exposition builds upon previous works on the category of matrices (and the related category of finite vector spaces), which we extend by showing why the direct sum (⊕,0) monoid is not closed and giving a biproduct encoding of Gaussian elimination. The inspection of the stepwise transformation of the composition of matrices MN into the Naïve matrix multiplication algorithm evidences that the steps...

  6. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    International Nuclear Information System (INIS)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo; Martinet, Philippe

    2008-01-01

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector, based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity of the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which previously required ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares inherits by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.
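
    A minimal sketch of the singular-value-dependent damping idea follows: each singular direction gets its own damping factor drawn from a Gaussian profile of the singular value. The maximum damping lambda_max and the width w are the kind of quantities a search such as the genetic algorithm would tune; all numbers here are illustrative.

        # Damped least squares step with a Gaussian profile for the damping factor.
        import numpy as np

        def dls_gaussian_step(J, dx, lambda_max=0.1, w=0.05):
            """Joint velocity dq for a desired end-effector displacement dx."""
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            # Large damping only where the singular value is small, decaying
            # smoothly towards zero for well-conditioned directions.
            lam = lambda_max * np.exp(-(s ** 2) / (2.0 * w ** 2))
            gains = s / (s ** 2 + lam ** 2)
            return Vt.T @ (gains * (U.T @ dx))

        J = np.array([[1.0, 0.5], [0.0, 1e-3]])     # nearly singular Jacobian (toy example)
        print(dls_gaussian_step(J, np.array([0.01, 0.01])))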

  7. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo [Sungkyunkwan University, Suwon (Korea, Republic of); Martinet, Philippe [Blaise Pascal University, Clermont-Ferrand Cedex (France)

    2008-07-15

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector, based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity of the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which previously required ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares inherits by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.

  8. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    Science.gov (United States)

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-05-04

    Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in the search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared to the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also shows better performance than a genetic algorithm optimization method in the optimization of the ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows a straightforward development of ReaxFF reactive force fields.

  9. Secure Degrees of Freedom of the Gaussian Z Channel with Single Antenna

    Directory of Open Access Journals (Sweden)

    Xianzhong XIE

    2014-03-01

    Full Text Available This paper presents the secrecy capacity and the secure degrees of freedom of the Gaussian Z channel with a single antenna and confidential information. Firstly, we analyze the secrecy capacity and the upper bound on the secure degrees of freedom of this channel theoretically. Then, we discuss security pre-coding schemes for the real Gaussian channel model and the frequency-selective channel model, respectively. Under the first model, through real interference alignment and cooperative jamming, we obtain the secrecy capacity and secure degrees of freedom, proving that the theoretical upper bound on the secure degrees of freedom can be reached. Under the second model, a strong security pre-coding algorithm is proposed, based on the fact that sparse matrices have a strong hash property. Next, we arrange the interference with interference alignment, and the receivers process their received signals with a zero-forcing algorithm. Finally, the messages are reconstructed with maximum likelihood decoding, and it is shown that the algorithm can asymptotically achieve the optimal secrecy capacity.

  10. Non-gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hoejstrup, J [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K S [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B J [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    The pdfs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is being investigated using data from a large WEB-database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, by which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models and the resulting loads in the Gaussian and the non-Gaussian cases will be compared. (au)

  11. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  12. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    DEFF Research Database (Denmark)

    Berg, Jacob; Natarajan, Anand; Mann, Jakob

    2016-01-01

    From large-eddy simulations of atmospheric turbulence, a representation of Gaussian turbulence is constructed by randomizing the phases of the individual modes of variability. Time series of Gaussian turbulence are constructed and compared with its non-Gaussian counterpart. Time series from the two ... taking into account the safety factor for extreme moments. Other extreme load moments as well as the fatigue loads are not affected by the use of non-Gaussian turbulent inflow. It is suggested that the turbine thus acts like a low-pass filter that averages out the non-Gaussian behaviour, which ...

  13. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model a very big flexibility. Support Vector Machines (SVM's) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.

  14. E-PLE: an Algorithm for Image Inpainting

    Directory of Open Access Journals (Sweden)

    Yi-Qing Wang

    2013-12-01

    Full Text Available Gaussian mixture is a powerful tool for modeling the patch prior. In this work, a probabilistic view of an existing algorithm, piecewise linear estimation (PLE) for image inpainting, is presented, which leads to several theoretical and numerical improvements based on an effective use of Gaussian mixture.

  15. A User-Adaptive Algorithm for Activity Recognition Based on K-Means Clustering, Local Outlier Factor, and Multivariate Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Shizhen Zhao

    2018-06-01

    Full Text Available Mobile activity recognition is significant to the development of human-centric pervasive applications including elderly care, personalized recommendations, etc. Nevertheless, the distribution of inertial sensor data can be influenced to a great extent by varying users. This means that the performance of an activity recognition classifier trained on one user's dataset will degrade when transferred to others. In this study, we focus on building a personalized classifier to detect four categories of human activities: light intensity activity, moderate intensity activity, vigorous intensity activity, and fall. In order to solve the problem caused by different distributions of inertial sensor signals, a user-adaptive algorithm based on K-Means clustering, local outlier factor (LOF), and multivariate Gaussian distribution (MGD) is proposed. To automatically cluster and annotate a specific user's activity data, an improved K-Means algorithm with a novel initialization method is designed. By quantifying the informative degree of the samples in a labeled individual dataset, the most profitable samples can be selected for activity recognition model adaptation. Through experiments, we conclude that our proposed models can adapt to new users with good recognition performance.

  16. Gaussian Plume Model Parameters for Ground-Level and Elevated Sources Derived from the Atmospheric Diffusion Equation in the Neutral and Stable Conditions

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2009-01-01

    The analytical solution of the atmospheric diffusion equation for a point source gives the ground-level concentration profiles. It depends on the wind speed u and the vertical dispersion coefficient σz expressed by Pasquill power laws. Both σz and u are functions of downwind distance, stability and source elevation, while for ground-level emission u is constant. For neutral and stable conditions, the Gaussian plume model and a finite difference numerical method, with the wind speed following a power law and the vertical dispersion coefficient following an exponential law, are evaluated. This work shows that the ground-level concentrations estimated by the Gaussian model for an elevated source and by the numerical finite difference method fit the observed ground-level concentrations very well.
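
    For concreteness, a standard form of the ground-level, crosswind Gaussian plume concentration with power-law dispersion coefficients is sketched below; the coefficient values are purely illustrative and are not the ones fitted in the paper.

        # Ground-level Gaussian plume concentration for an elevated point source.
        import numpy as np

        def ground_level_concentration(Q, x, y, H, u=4.0, a=0.08, b=0.9, c=0.06, d=0.9):
            """C(x, y, 0) [g/m^3] for source strength Q [g/s], wind speed u [m/s],
            effective release height H [m]; sigma_y and sigma_z follow power laws in x."""
            sigma_y = a * x ** b                  # horizontal spread [m]
            sigma_z = c * x ** d                  # vertical spread   [m]
            return (Q / (np.pi * u * sigma_y * sigma_z)
                    * np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
                    * np.exp(-H ** 2 / (2.0 * sigma_z ** 2)))

        # 1 g/s release from a 100 m stack, receptor 2 km downwind on the plume axis
        print(ground_level_concentration(Q=1.0, x=2000.0, y=0.0, H=100.0))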

  17. Modelling of atmospheric dispersion in a complex medium and associated uncertainties

    International Nuclear Information System (INIS)

    Demael, Emmanuel

    2007-01-01

    This research thesis addresses the numerical modelling of atmospheric dispersion. It aimed at validating the Mercure-Saturne tool used with a RANS (Reynolds Averaged Navier-Stokes) approach within the frame of an impact study or of an accidental scenario on a nuclear site, while taking buildings and ground relief into account, at comparing the Mercure-Saturne model with a simpler and less costly (in terms of computation time) Gaussian tool (the ADMS software, Atmospheric Dispersion Modelling System), and at quantifying uncertainties related to the use of the Mercure-Saturne model. The first part introduces theoretical elements of atmosphere physics and of atmospheric dispersion in a boundary layer, and presents the Gaussian model and the Mercure-Saturne tool with its associated RANS approach. The second part reports the comparison of the Mercure-Saturne model with conventional Gaussian plume models. The third part reports the study of the atmospheric flow and dispersion about the Bugey nuclear site, based on a study performed in a wind tunnel. The fourth part reports the same kind of study for the Flamanville site. The fifth part reports the use of different approaches for the study of uncertainties in the case of the Bugey site: application of the Morris method (a screening method) and of the Monte Carlo method (quantification of the uncertainty and of the sensitivity of each uncertainty source).

  18. Random noise suppression of seismic data using non-local Bayes algorithm

    Science.gov (United States)

    Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying

    2018-02-01

    For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. The NL-Bayes algorithm uses a Gaussian model instead of the weighted average of all similar patches used in the NL-means algorithm, reducing the blurring of structural details and thereby improving the denoising performance. In the denoising of seismic data, the size and the number of patches in the Gaussian model are adaptively calculated according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration uses the denoised seismic data from the first iteration to calculate better estimates of the mean and covariance of the patch Gaussian model, improving the similarity of the patches and thus the denoising. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of seismic data.
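
    The patch-level estimate at the heart of NL-Bayes-type denoising can be sketched as a Wiener-style shrinkage towards the mean of a group of similar patches; the snippet below is a simplified illustration only (patch search, aggregation and the two-iteration scheme are omitted, and the data are synthetic).

        # Bayesian estimate for a stack of similar noisy patches (flattened rows).
        import numpy as np

        def patch_gaussian_estimate(patches, sigma):
            """x_hat = mean + C (C + sigma^2 I)^-1 (y - mean), applied row-wise."""
            mean = patches.mean(axis=0)
            centred = patches - mean
            d = patches.shape[1]
            C = centred.T @ centred / max(len(patches) - 1, 1)   # empirical covariance
            # C and C + sigma^2 I are symmetric, so the row-wise form below is equivalent
            return mean + centred @ np.linalg.solve(C + sigma ** 2 * np.eye(d), C)

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0.0, 1.0, 9), (20, 1))       # 20 similar 3x3 patches
        noisy = clean + rng.normal(0.0, 0.1, clean.shape)
        print(patch_gaussian_estimate(noisy, sigma=0.1)[0])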

  19. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  20. 40Gbit/s MDM-WDM Laguerre-Gaussian Mode with Equalization for Multimode Fiber in Access Networks

    Science.gov (United States)

    Fazea, Yousef; Amphawan, Angela

    2018-04-01

    Modal dispersion is seen as the primary impairment in multimode fiber. Mode division multiplexing (MDM), in conjunction with wavelength division multiplexing (WDM), is a promising technology for considerably increasing the capacity and reach of multimode fiber for fiber-to-the-home. This paper shows the importance of an equalization technique, combined with control of the mode spacing of MDM-WDM Laguerre-Gaussian modes, in alleviating modal dispersion in multimode fiber. The effects of channel spacing in a 20-channel MDM-WDM system were examined by controlling the azimuthal and radial mode numbers of the Laguerre-Gaussian modes. A data rate of 40 Gbit/s was achieved over a distance of 1,500 m with MDM-WDM.

  1. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  2. Adaptive Electronic Dispersion Compensator for Chromatic and Polarization-Mode Dispersions in Optical Communication Systems

    OpenAIRE

    Koc Ut-Va

    2005-01-01

    The widely-used LMS algorithm for coefficient updates in adaptive (feedforward/decision-feedback) equalizers is found to be suboptimal for ASE-dominant systems, but various coefficient-dithering approaches suffer from a slow adaptation rate without guarantee of convergence. In view of the non-Gaussian nature of optical noise after the square-law optoelectronic conversion, we propose to apply the higher-order least-mean 2Nth-order (LMN) algorithms, resulting in an OSNR penalty which is 1.5–2 dB...

  3. Improved atmospheric dispersion modelling in the new program system UFOMOD for accident consequence assessments

    International Nuclear Information System (INIS)

    Panitz, H.J.

    1988-01-01

    An essential aim of the improvements of the new program system UFOMOD for Accident Consequence Assessments (ACAs) was to substitute the straight-line Gaussian plume model conventionally used in ACA models by more realistic atmospheric dispersion models. To identify improved models which can be applied in ACA codes and to quantify the implications of different concepts of dispersion modelling on the results of an ACA, probabilistic comparative calculations with different atmospheric dispersion models have been carried out. The study showed that there are trajectory models available which can be applied in ACAs and that these trajectory models provide more realistic results of ACAs than straight-line Gaussian models. This led to a completely novel concept of atmospheric dispersion modelling which distinguishes between two different distance ranges of validity: the near range (< 50 km) and the adjacent far range (> 50 km). The two ranges are assigned to respective trajectory models

  4. Gaussian measures of entanglement versus negativities: Ordering of two-mode Gaussian states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-01-01

    We study the entanglement of general (pure or mixed) two-mode Gaussian states of continuous-variable systems by comparing the two available classes of computable measures of entanglement: entropy-inspired Gaussian convex-roof measures and positive partial transposition-inspired measures (negativity and logarithmic negativity). We first review the formalism of Gaussian measures of entanglement, adopting the framework introduced in M. M. Wolf et al., Phys. Rev. A 69, 052320 (2004), where the Gaussian entanglement of formation was defined. We compute explicitly Gaussian measures of entanglement for two important families of nonsymmetric two-mode Gaussian states, namely the states of extremal (maximal and minimal) negativities at fixed global and local purities, introduced in G. Adesso et al., Phys. Rev. Lett. 92, 087901 (2004). This analysis allows us to compare the different orderings induced on the set of entangled two-mode Gaussian states by the negativities and by the Gaussian measures of entanglement. We find that in a certain range of values of the global and local purities (characterizing the covariance matrix of the corresponding extremal states), states of minimum negativity can have more Gaussian entanglement of formation than states of maximum negativity. Consequently, Gaussian measures and negativities are definitely inequivalent measures of entanglement on nonsymmetric two-mode Gaussian states, even when restricted to a class of extremal states. On the other hand, the two families of entanglement measures are completely equivalent on symmetric states, for which the Gaussian entanglement of formation coincides with the true entanglement of formation. Finally, we show that the inequivalence between the two families of continuous-variable entanglement measures is somehow limited. Namely, we rigorously prove that, at fixed negativities, the Gaussian measures of entanglement are bounded from below. Moreover, we provide some strong evidence suggesting that they

  5. Discrete dispersion models and their Tweedie asymptotics

    DEFF Research Database (Denmark)

    Jørgensen, Bent; Kokonendji, Célestin C.

    2016-01-01

    The paper introduces a class of two-parameter discrete dispersion models, obtained by combining convolution with a factorial tilting operation, similar to exponential dispersion models which combine convolution and exponential tilting. The equidispersed Poisson model has a special place in this approach, whereas several overdispersed discrete distributions, such as the Neyman Type A, Pólya-Aeppli, negative binomial and Poisson-inverse Gaussian, turn out to be Poisson-Tweedie factorial dispersion models with power dispersion functions, analogous to ordinary Tweedie exponential dispersion models with power variance functions. Using the factorial cumulant generating function as a tool, we introduce a dilation operation as a discrete analogue of scaling, generalizing binomial thinning. The Poisson-Tweedie factorial dispersion models are closed under dilation, which in turn leads to a Poisson

  6. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    Science.gov (United States)

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model the noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models within a framework that neatly describes the spatial and interscale dependencies, which are properties of the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138

  7. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level through alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. Especially, the proposed algorithms are very promising for complex problems with many local optima.

  8. Turbo Equalization Using Partial Gaussian Approximation

    DEFF Research Database (Denmark)

    Zhang, Chuanzong; Wang, Zhongyong; Manchón, Carles Navarro

    2016-01-01

    This letter deals with turbo equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation propagation rule to convert messages passed from the demodulator and decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). We exploit the specific structure of the ISI channel model to compute the latter messages from the beliefs obtained using a Kalman smoother/equalizer. Doing so leads to a significant complexity reduction compared to the initial PGA

  9. Using Geometrical Properties for Fast Indexation of Gaussian Vector Quantizers

    Directory of Open Access Journals (Sweden)

    Vassilieva EA

    2007-01-01

    Full Text Available Vector quantization is a classical method used in mobile communications. Each sequence of samples of the discretized vocal signal is associated with the closest codevector of a given set called the codebook. Only the binary indices of these codevectors (the codewords) are transmitted over the channel. Since channels are generally noisy, the codewords received are often slightly different from the codewords sent. In order to minimize the distortion of the original signal due to this noisy transmission, codevectors indexed by codewords that differ in one bit should have a small mutual Euclidean distance. This paper is devoted to this problem of index assignment of binary codewords to the codevectors. When the vector quantizer has a Gaussian structure, we show that a fast index assignment algorithm based on simple geometrical and combinatorial considerations can improve the SNR at the receiver by 5 dB with respect to a purely random assignment. We also show that in the Gaussian case this algorithm outperforms the classical combinatorial approach in the field.

  10. Spectral dispersion and fringe detection in IOTA

    Science.gov (United States)

    Traub, W. A.; Lacasse, M. G.; Carleton, N. P.

    1990-01-01

    Pupil plane beam combination, spectral dispersion, detection, and fringe tracking are discussed for the IOTA interferometer. A new spectrometer design is presented in which the angular dispersion with respect to wavenumber is nearly constant. The dispersing element is a type of grism, a series combination of grating and prism, in which the constant parts of the dispersion add, but the slopes cancel. This grism is optimized for the display of channelled spectra. The dispersed fringes can be tracked by a matched-filter photon-counting correlator algorithm. This algorithm requires very few arithmetic operations per detected photon, making it well-suited for real-time fringe tracking. The algorithm is able to adapt to different stellar spectral types, intensity levels, and atmospheric time constants. The results of numerical experiments are reported.

  11. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions. Moreover, these processes can match a broader range of second ... the proposed Monte Carlo algorithm and compare features of translation processes and mixture of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes

  12. Nonlinear and non-Gaussian Bayesian based handwriting beautification

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2013-03-01

    A framework is proposed in this paper to effectively and efficiently beautify handwriting by means of a novel nonlinear and non-Gaussian Bayesian algorithm. In the proposed framework, the format and size of the handwriting image are first normalized, and a typeface available in the computer system is then used to optimize the visual effect of the handwriting. Bayesian statistics is exploited to characterize the handwriting beautification process as a Bayesian dynamic model: the model parameters that translate, rotate and scale the typeface are governed by the state equation, and the matching between the handwriting and the transformed typeface is captured by the measurement equation. Finally, the new typeface, transformed from the original one to achieve the best nonlinear and non-Gaussian optimization, is the beautification result of the handwriting. Experimental results demonstrate that the proposed framework provides a creative handwriting beautification methodology with improved visual acceptance.

  13. Underlay Cognitive Radio Systems with Improper Gaussian Signaling: Outage Performance Analysis

    KAUST Repository

    Amin, Osama

    2016-03-29

    Improper Gaussian signaling has the ability over proper (conventional) Gaussian signaling to improve the achievable rate of systems that suffer from interference. In this paper, we study the impact of using improper Gaussian signaling on the performance limits of the underlay cognitive radio system by analyzing the achievable outage probability of both the primary user (PU) and secondary user (SU). We derive the exact outage probability expression of the SU and construct upper and lower bounds of the PU outage probability which results in formulating an approximate expression of the PU outage probability. This allows us to design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the derived expressions for both the SU and the PU and the corresponding adaptive algorithms by numerical results.

  14. Underlay Cognitive Radio Systems with Improper Gaussian Signaling: Outage Performance Analysis

    KAUST Repository

    Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    Improper Gaussian signaling has the ability over proper (conventional) Gaussian signaling to improve the achievable rate of systems that suffer from interference. In this paper, we study the impact of using improper Gaussian signaling on the performance limits of the underlay cognitive radio system by analyzing the achievable outage probability of both the primary user (PU) and secondary user (SU). We derive the exact outage probability expression of the SU and construct upper and lower bounds of the PU outage probability which results in formulating an approximate expression of the PU outage probability. This allows us to design the SU signal by adjusting its transmitted power and the circularity coefficient to minimize the SU outage probability while maintaining a certain PU quality-of-service. Finally, we evaluate the derived expressions for both the SU and the PU and the corresponding adaptive algorithms by numerical results.

  15. Adaptive Laguerre-Gaussian variant of the Gaussian beam expansion method.

    Science.gov (United States)

    Cagniot, Emmanuel; Fromager, Michael; Ait-Ameur, Kamel

    2009-11-01

    A variant of the Gaussian beam expansion method consists in expanding the Bessel function J0 appearing in the Fresnel-Kirchhoff integral into a finite sum of complex Gaussian functions to derive an analytical expression for a Laguerre-Gaussian beam diffracted through a hard-edge aperture. However, the validity range of the approximation depends on the number of expansion coefficients, which are obtained directly by numerical optimization. We propose another solution that consists in expanding J0 onto a set of collimated Laguerre-Gaussian functions whose waist depends on their number and then, depending on its argument, predicting the suitable number of expansion functions needed to calculate the integral recursively.

  16. LNG vapor dispersion prediction with the DEGADIS dense-gas dispersion model. Topical report, April 1988-July 1990. Documentation

    International Nuclear Information System (INIS)

    Havens, J.; Spicer, T.

    1990-09-01

    The topical report is one of a series on the development of methods for LNG vapor dispersion prediction for regulatory application. The results indicate that the DEGADIS model is superior both phenomenologically and in performance to the Gaussian line source model promulgated in 49 CFR 193 for LNG vapor dispersion simulation. Availability of the DEGADIS model for VAX and IBM-PC formats provides for wider use of the model and greater potential for industry and regulatory acceptance. The acceptance is seen as an important interim objective while research continues on vapor dispersion estimation methods which provide for effects of vapor detention systems, turbulence induced by plant structure, and plant/area topographical features

  17. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  18. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  19. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
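
    For readers unfamiliar with fraction-free elimination, the following minimal sketch shows the sequential (non-array) form of a Bareiss-style fraction-free Gaussian elimination of the kind such array processors parallelize. Pivoting is omitted for brevity and the example matrix is illustrative; this is not the paper's array-processor design.

```python
def bareiss_eliminate(M):
    """Fraction-free (Bareiss) Gaussian elimination on an integer matrix.

    Assumes the leading pivots are nonzero (no pivoting, to keep the sketch short).
    Every division below is exact, so all intermediate entries remain integers.
    """
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    prev = 1                           # previous pivot (1 before the first step)
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, len(A[i])):
                # Bareiss update: the division by the previous pivot is exact.
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

# Augmented system [A | b]; after elimination the square part is upper
# triangular and its last pivot equals det(A) (here -1).
aug = [[ 2,  1, -1,   8],
       [-3, -1,  2, -11],
       [-2,  1,  2,  -3]]
print(bareiss_eliminate(aug))
```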

  20. IMPROVED SIMULATION OF NON-GAUSSIAN TEMPERATURE AND POLARIZATION COSMIC MICROWAVE BACKGROUND MAPS

    International Nuclear Information System (INIS)

    Elsner, Franz; Wandelt, Benjamin D.

    2009-01-01

    We describe an algorithm to generate temperature and polarization maps of the cosmic microwave background (CMB) radiation containing non-Gaussianity of arbitrary local type. We apply an optimized quadrature scheme that allows us to predict and control integration accuracy, speed up the calculations, and reduce memory consumption by an order of magnitude. We generate 1000 non-Gaussian CMB temperature and polarization maps up to a multipole moment of l_max = 1024. We validate the method and code using the power spectrum and the fast cubic (bispectrum) estimator and find consistent results. The simulations are provided to the community.

  1. Using Gaussian Process Annealing Particle Filter for 3D Human Tracking

    Directory of Open Access Journals (Sweden)

    Michael Rudzsky

    2008-01-01

    We present an approach for tracking human body parts in 3D with pre-learned motion models using multiple cameras. A Gaussian process annealing particle filter is proposed for tracking in order to reduce the dimensionality of the problem and to increase the tracker's stability and robustness. Compared with a regular annealed particle filter-based tracker, we show that our algorithm tracks better on low frame rate videos. We also show that our algorithm is capable of recovering after a temporary loss of the target.

  2. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    Science.gov (United States)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6

  3. Resource theory of non-Gaussian operations

    Science.gov (United States)

    Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.

    2018-05-01

    Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.

  4. Neutron study of non-Gaussian self dynamics in liquid parahydrogen

    International Nuclear Information System (INIS)

    Bafile, Ubaldo; Celli, Milva; Colognesi, Daniele; Zoppi, Marco; Guarini, Eleonora; De Francesco, Alessio; Formisano, Ferdinando; Neumann, Martin

    2012-01-01

    A time-honoured approach to single-molecule, or self, dynamics of liquids is based on the so-called Gaussian approximation (GA), where it is assumed that, in the whole dynamical range between hydrodynamic diffusion and free-particle streaming, the motion of a particle is fully determined by a unique function of time directly related to the velocity autocorrelation function. An evident support to the GA is offered by the fact that the approximation becomes exact in both above limit conditions. Yet, experimental inquiries into the presence of non-Gaussian dynamics are very scarce, particularly in liquid parahydrogen in spite of its importance as the prototype of a 'quantum Boltzmann liquid' which has also served as a benchmark for the development of quantum dynamics simulation algorithms. Though experimental evidence of the breakdown of the GA was obtained by some of the authors a few years ago, the localization in Q space of non-Gaussian behaviour was still undetermined, and no quantitative assessment of the effect was ever obtained. These issues have been tackled and solved by a new neutron investigation, which provides the first determination of non-Gaussian behaviour in the framework of the well-known theoretical approach by Rahman, Singwi and Sjölander.

  5. Photonic generation of FCC-compliant UWB pulses based on modified Gaussian quadruplet and incoherent wavelength-to-time conversion

    Science.gov (United States)

    Mu, Hongqian; Wang, Muguang; Tang, Yu; Zhang, Jing; Jian, Shuisheng

    2018-03-01

    A novel scheme for the generation of FCC-compliant UWB pulses is proposed, based on a modified Gaussian quadruplet and incoherent wavelength-to-time conversion. The modified Gaussian quadruplet is synthesized as a linear sum of a broad Gaussian pulse and two narrow Gaussian pulses with the same pulse width and peak amplitude. Within a specific parameter range, FCC-compliant UWB with spectral power efficiency higher than 39.9% can be achieved. To realize the designed waveform, a UWB generator based on spectral shaping and incoherent wavelength-to-time mapping is proposed. The spectral shaper is composed of a Gaussian filter and a programmable filter. Single-mode fiber functions as both the dispersion device and the transmission medium. Balanced photodetection is employed to combine the broad Gaussian pulse and the two narrow Gaussian pulses linearly, and at the same time to suppress the pulse pedestals that produce low-frequency components. The proposed UWB generator can be reconfigured for UWB doublets by operating the programmable filter as a single-band Gaussian filter. The feasibility of the proposed UWB generator is demonstrated experimentally. Measured UWB pulses match well with simulation results. An FCC-compliant quadruplet with a 10-dB bandwidth of 6.88 GHz, a fractional bandwidth of 106.8% and a power efficiency of 51% is achieved.
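
    A minimal numerical sketch of the waveform idea: a broad Gaussian pulse combined with two equal, narrow Gaussian pulses, whose difference (as balanced photodetection would produce) suppresses the low-frequency pedestal. All pulse widths, offsets and amplitudes below are assumed placeholders, not the parameters of the reported experiment.

```python
import numpy as np

# Time grid (illustrative): 2 ns window sampled at 100 GS/s.
t = np.arange(-1e-9, 1e-9, 1e-11)

def gauss(t, t0, sigma):
    return np.exp(-(t - t0) ** 2 / (2 * sigma ** 2))

# Modified Gaussian quadruplet: a broad pulse minus two narrow pulses of equal
# width and peak amplitude, symmetrically offset.  Signs, offsets and the
# scaling factor are assumptions chosen to suppress low-frequency content.
sigma_broad, sigma_narrow, offset, a = 120e-12, 45e-12, 90e-12, 0.55
pulse = gauss(t, 0.0, sigma_broad) - a * (gauss(t, -offset, sigma_narrow)
                                          + gauss(t, +offset, sigma_narrow))

# Power spectrum, to be compared qualitatively against the FCC indoor mask.
spectrum = np.abs(np.fft.rfft(pulse)) ** 2
freq_GHz = np.fft.rfftfreq(t.size, d=1e-11) / 1e9
```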

  6. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms

  7. Characterization of ultrashort laser pulses employing self-phase modulation dispersion-scan technique

    Science.gov (United States)

    Sharba, A. B.; Chekhlov, O.; Wyatt, A. S.; Pattathil, R.; Borghesi, M.; Sarri, G.

    2018-03-01

    We present a new phase characterization technique for ultrashort laser pulses that employs self-phase modulation (SPM) in the dispersion-scan approach. The method can be implemented by recording a set of nonlinearly modulated spectra generated with a set of known chirp values. The unknown phase of the pulse is retrieved by linking the recorded spectra to the initial spectrum of the pulse via a phase function determined by an iterative function-minimization algorithm. This technique has many advantages over dispersion-scan techniques that use frequency conversion processes. Mainly, the use of SPM cancels out the phase and group-velocity mismatch errors and dramatically widens the spectral acceptance of the nonlinear medium and the range of working wavelengths. The robustness of the technique is demonstrated with smooth and complex phase retrievals using numerical examples. The method is shown to be unaffected by the spatial distribution of the beam or the presence of nonlinear absorption. In addition, we present an efficient method for phase representation based on a summation of a set of Gaussian functions. The independence of the functions from each other prevents phase coupling of any kind and facilitates a flexible phase representation.
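
    The Gaussian-sum phase representation mentioned at the end can be sketched as follows; the fixed centres, width and units are assumptions, with only the amplitudes left as retrieval variables.

```python
import numpy as np

def phase_from_gaussians(omega, centres, width, amplitudes):
    """Spectral phase built as a sum of fixed-width Gaussians on a grid of
    centres; only the amplitudes are free, so the terms do not couple the way
    polynomial (Taylor) phase coefficients do."""
    basis = np.exp(-(omega[:, None] - centres[None, :]) ** 2 / (2.0 * width ** 2))
    return basis @ amplitudes

# Illustrative grid: 9 Gaussians spanning the pulse bandwidth (units assumed).
omega = np.linspace(-0.3, 0.3, 512)
centres = np.linspace(-0.3, 0.3, 9)
amps = np.random.default_rng(0).normal(0.0, 1.0, centres.size)
phi = phase_from_gaussians(omega, centres, width=0.05, amplitudes=amps)
```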

  8. State-Space Inference and Learning with Gaussian Processes

    OpenAIRE

    Turner, R; Deisenroth, MP; Rasmussen, CE

    2010-01-01

    State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...

  9. High resolution electron exit wave reconstruction from a diffraction pattern using Gaussian basis decomposition

    International Nuclear Information System (INIS)

    Borisenko, Konstantin B; Kirkland, Angus I

    2014-01-01

    We describe an algorithm to reconstruct the electron exit wave of a weak-phase object from a single diffraction pattern. The algorithm uses analytic formulations describing the diffraction intensities through a representation of the object exit wave in a Gaussian basis. The reconstruction is achieved by solving an overdetermined system of non-linear equations using an easily parallelisable global multi-start search with Levenberg-Marquardt optimisation and analytic derivatives.

  10. Atmospheric dispersion estimates in the vicinity of buildings

    International Nuclear Information System (INIS)

    Ramsdell, J.V. Jr.; Fosmire, C.J.

    1995-01-01

    A model describing atmospheric dispersion in the vicinity of buildings was developed for the U.S. Nuclear Regulatory Commission (NRC) in the late 1980s. That model has recently undergone additional peer review, and the reviewers identified four areas of concern related to the model and its application. This report describes revisions to the model in response to the reviewers' concerns. The revised model explicitly treats the enhanced dispersion that occurs at low wind speeds as well as the enhanced dispersion at high wind speeds caused by building wakes. Model parameters are evaluated from turbulence data. Experimental diffusion data from seven reactor sites are used for model evaluation. Compared with the models recommended in current NRC guidance to licensees, the revised model is less biased and shows more predictive skill. The revised model is also compared with two non-Gaussian models developed to estimate maximum concentrations in building wakes; its concentration predictions are nearly the same as those of the non-Gaussian models. On the basis of these comparisons with experimental data and with the predictions of other models, the revised model is found to be appropriate for estimating concentrations in the vicinity of buildings.

  11. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

    Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian; this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than MSE-based methods, especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures mean square convergence. Simulation results confirm the excellent performance of the new algorithm.
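
    For orientation, the sketch below shows the error-entropy criterion itself: the Parzen-window "information potential" of the errors, whose maximization is equivalent to minimizing Renyi's quadratic error entropy, together with a plain stochastic-gradient step. The proportionate step-size assignment that distinguishes PMEE is not included; all names, constants and data are illustrative.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen-window estimate of the information potential V = mean_ij G_sigma(e_i - e_j).
    Maximizing V is equivalent to minimizing Renyi's quadratic error entropy (-log V)."""
    diff = errors[:, None] - errors[None, :]
    return np.mean(np.exp(-diff ** 2 / (2 * sigma ** 2)))

def mee_gradient_step(w, X, d, mu=0.1, sigma=1.0):
    """One plain gradient-ascent step on V over a window of samples
    (no proportionate step-size matrix, unlike PMEE)."""
    e = d - X @ w
    diff = e[:, None] - e[None, :]
    kern = np.exp(-diff ** 2 / (2 * sigma ** 2))
    # dV/dw is proportional to mean_ij G(e_i - e_j) (e_i - e_j) (x_i - x_j).
    grad = (kern[..., None] * diff[..., None]
            * (X[:, None, :] - X[None, :, :])).mean(axis=(0, 1)) / sigma ** 2
    return w + mu * grad

# Illustrative call on a small window of input rows X and desired outputs d.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 4))
d = X @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.1 * rng.standard_cauchy(32)  # impulsive noise
w = mee_gradient_step(np.zeros(4), X, d)
```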

  12. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    Science.gov (United States)

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by the vertical dispersion parameter (sigma-z), whose importance increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Among the meteorological inputs, the precision of the joint frequency data appears to be most important when calculations are made for near-field receptors, and its importance increases as stack height increases.
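
    The two relative-concentration forms discussed above can be written compactly. A minimal sketch follows, assuming an elevated point source, ground-level receptors and a 16-sector joint frequency distribution; the dispersion parameters sigma_y and sigma_z are left as inputs, to be supplied by a stability-class scheme or on-site turbulence data, and the numbers in the example calls are illustrative only.

```python
import numpy as np

def chi_over_Q_centerline(sigma_y, sigma_z, u, H):
    """Ground-level centreline relative concentration (s/m^3) for a
    straight-line Gaussian plume from an elevated point source at height H."""
    return np.exp(-H**2 / (2 * sigma_z**2)) / (np.pi * u * sigma_y * sigma_z)

def chi_over_Q_sector(sigma_z, u, x, H, n_sectors=16):
    """Sector-averaged relative concentration: the plume spread uniformly over
    one wind-direction sector of arc width 2*pi*x/n_sectors, the form used with
    a joint frequency distribution of wind direction, speed and stability."""
    sector_width = 2 * np.pi * x / n_sectors
    return (np.sqrt(2 / np.pi) / (u * sigma_z * sector_width)
            * np.exp(-H**2 / (2 * sigma_z**2)))

# Illustrative numbers only: sigma_y, sigma_z would come from a stability-class
# scheme or on-site turbulence data at the downwind distance x.
print(chi_over_Q_centerline(sigma_y=35.0, sigma_z=18.0, u=3.0, H=30.0))
print(chi_over_Q_sector(sigma_z=18.0, u=3.0, x=500.0, H=30.0))
```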

  13. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit

    2014-07-28

    Several fascinating examples of non-Gaussian bivariate distributions which have Gaussian marginal distribution functions have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  14. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2014-01-01

    Several fascinating examples of non-Gaussian bivariate distributions which have Gaussian marginal distribution functions have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  15. Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information

    Science.gov (United States)

    Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.

    2018-06-01

    Gaussian process tomography (GPT) is a recently developed tomography method based on the Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, the Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions with a view to impurity transport and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and the results are shown of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra. The method is compared with the standard algorithm based on minimization of the Fisher information.

  16. MESOI, an interactive atmospheric dispersion model for emergency response applications

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1984-01-01

    MESOI is an interactive atmospheric dispersion model that has been developed for use by the U.S. Department of Energy and the U.S. Nuclear Regulatory Commission in responding to emergencies at nuclear facilities. MESOI uses both a straight-line Gaussian plume model and a Lagrangian trajectory Gaussian puff model to estimate time-integrated ground-level air and surface concentrations. Puff trajectories are determined from temporally and spatially varying horizontal wind fields that are defined in three dimensions. Other processes treated in MESOI include dry deposition, wet deposition and radioactive decay.
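
    As a rough sketch of the Lagrangian puff part, each puff contributes a tri-Gaussian concentration field (with a ground-reflection image term), and the plume is the sum over puffs advected along their trajectories. The puff positions and sigmas below are illustrative placeholders, not MESOI's parameterizations.

```python
import numpy as np

def puff_concentration(q, xc, yc, H, sx, sy, sz, x, y, z=0.0):
    """Concentration at (x, y, z) from one Gaussian puff of mass q centred at
    (xc, yc, H), with ground reflection included via an image puff."""
    norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
    horiz = np.exp(-(x - xc) ** 2 / (2 * sx ** 2) - (y - yc) ** 2 / (2 * sy ** 2))
    vert = (np.exp(-(z - H) ** 2 / (2 * sz ** 2))
            + np.exp(-(z + H) ** 2 / (2 * sz ** 2)))
    return norm * horiz * vert

# A trajectory-puff calculation advects each puff with the local wind and grows
# its sigmas with travel time; summing the puffs gives the concentration.
# Illustrative: three puffs released one after another into a steady wind.
xs = np.array([400.0, 200.0, 50.0])                       # puff centres after advection (m)
sigmas = np.array([[60, 60, 30], [35, 35, 18], [12, 12, 8]], dtype=float)
c = sum(puff_concentration(1.0, xc, 0.0, 20.0, sx, sy, sz, x=300.0, y=0.0)
        for xc, (sx, sy, sz) in zip(xs, sigmas))
```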

  17. Methods for calculating population dose from atmospheric dispersion of radioactivity

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B L; Jow, H N; Lee, I S [Pittsburgh Univ., PA (USA)

    1978-06-01

    Curves are computed from which the population dose (man-rem) due to dispersal of radioactivity from a point source can be calculated in the Gaussian plume model by simple multiplication; methods of using them and their limitations are considered. Illustrative examples are presented.

  18. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies of localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  19. A modified Gaussian model for the thermal plume from a ground-based heat source in a cross-wind

    International Nuclear Information System (INIS)

    Selander, W.N.; Barry, P.J.; Robertson, E.

    1990-06-01

    An array of propane burners operating at ground level in a cross-wind was used as a heat source to establish a blown-over thermal plume. A three-dimensional array of thermocouples was used to continuously measure the plume temperature downwind from the source. The resulting data were used to correlate the parameters of a modified Gaussian model for plume rise and dispersion with source strength, wind speed, and atmospheric dispersion parameters

  20. Quantum information with Gaussian states

    International Nuclear Information System (INIS)

    Wang Xiangbin; Hiroshima, Tohya; Tomita, Akihisa; Hayashi, Masahito

    2007-01-01

    Quantum optical Gaussian states are an important type of robust quantum states that can be manipulated with existing technologies. So far, most of the important quantum information experiments have been done with such states, including bright Gaussian light and weak Gaussian light. Extending the existing results of quantum information with discrete quantum states to the case of continuous-variable quantum states is an interesting theoretical task, and quantum Gaussian states play a central role in such a case. We review the properties and applications of Gaussian states in quantum information with emphasis on the fundamental concepts, the calculation techniques and the effects of imperfections in real-life experimental setups. Topics include the elementary properties of Gaussian states and relevant quantum information devices, entanglement-based quantum tasks such as quantum teleportation, quantum cryptography with weak and strong Gaussian states and the quantum channel capacity, the mathematical theory of quantum entanglement and state estimation for Gaussian states.

  1. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Science.gov (United States)

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures in long-term EEG recordings employing log-Euclidean Gaussian kernel-based sparse representation (SR). Unlike traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework operates in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) in which SR is performed. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over a dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
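
    The kernel at the heart of the method can be sketched directly: covariance descriptors of EEG epochs are mapped through the matrix logarithm and compared with a Gaussian kernel in the resulting Euclidean (tangent) space. The regularization constant, channel count and random data below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
    """k(X, Y) = exp(-||log X - log Y||_F^2 / (2 sigma^2)) for SPD matrices X, Y."""
    d = spd_log(X) - spd_log(Y)
    return np.exp(-np.sum(d * d) / (2 * sigma ** 2))

def epoch_covariance(epoch, eps=1e-6):
    """SPD covariance descriptor of one EEG epoch (channels x samples),
    regularised so the matrix logarithm is well defined."""
    c = np.cov(epoch)
    return c + eps * np.eye(c.shape[0])

# Illustrative: kernel value between two random 8-channel epochs.
rng = np.random.default_rng(0)
k = log_euclidean_gaussian_kernel(epoch_covariance(rng.standard_normal((8, 256))),
                                  epoch_covariance(rng.standard_normal((8, 256))))
```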

  2. A new algorithm for ECG interference removal from single channel EMG recording.

    Science.gov (United States)

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

    This paper presents a new method to remove electrocardiogram (ECG) interference from the electromyogram (EMG). This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise using a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio and Pearson correlation.

  3. Estimating plume dispersion: a comparison of several sigma schemes

    International Nuclear Information System (INIS)

    Irwin, J.S.

    1983-01-01

    The lateral and vertical Gaussian plume dispersion parameters are estimated and compared with field tracer data collected at 11 sites. The dispersion parameter schemes used in this analysis include Cramer's scheme, suggested for tall-stack dispersion estimates; Draxler's scheme, suggested for elevated and surface releases; Pasquill's scheme, suggested for interim use in dispersion estimates; and the Pasquill-Gifford scheme using Turner's technique for assigning stability categories. The schemes suggested by Cramer, Draxler and Pasquill estimate the dispersion parameters using onsite measurements of the vertical and lateral wind-velocity variances at the effective release height. The performance of these schemes in estimating the dispersion parameters is compared with that of the Pasquill-Gifford scheme using the Prairie Grass and Karlsruhe data. For these two experiments, the estimates of the dispersion parameters using Draxler's scheme correlate better with the measurements than do estimates using the Pasquill-Gifford scheme. Comparison of the dispersion parameter estimates with the measurements suggests that Draxler's scheme results in the smallest mean fractional error in the estimated dispersion parameters and the smallest variance of the fractional errors.
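
    Schemes of the Draxler/Cramer type estimate sigma_y from the measured lateral velocity fluctuation and the travel time, sigma_y = sigma_v * t * f(t/Ti). The sketch below uses one commonly quoted interpolation form; the constant 0.9 and the time scale Ti = 1000 s are illustrative placeholders rather than the exact values of any of the schemes compared in the paper.

```python
import numpy as np

def sigma_y_turbulence(sigma_v, travel_time, Ti=1000.0):
    """Lateral dispersion parameter from the measured lateral velocity standard
    deviation sigma_v (m/s) and travel time (s): sigma_y = sigma_v * t * f(t/Ti).
    The interpolation function and Ti are illustrative, not any one scheme's constants."""
    f = 1.0 / (1.0 + 0.9 * np.sqrt(travel_time / Ti))
    return sigma_v * travel_time * f

# Example: sigma_v = 0.6 m/s, wind speed 4 m/s, receptor 2 km downwind.
x, u = 2000.0, 4.0
print(sigma_y_turbulence(0.6, x / u))
```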

  4. When non-Gaussian states are Gaussian: Generalization of nonseparability criterion for continuous variables

    International Nuclear Information System (INIS)

    McHugh, Derek; Buzek, Vladimir; Ziman, Mario

    2006-01-01

    We present a class of non-Gaussian two-mode continuous-variable states for which the separability criterion for Gaussian states can be employed to detect whether they are separable or not. These states reduce to the two-mode Gaussian states as a special case

  5. A hybrid plume model for local-scale dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.

    1997-12-31

    The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. Only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. A new model is also presented for the vertical eddy diffusivity (K_z) as a continuous function of height across the various atmospheric scaling regions. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE; the numerical deviations of the model predictions from these analytic solutions were less than two per cent over the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.

  6. Numerical methods of estimating the dispersion of radionuclides in atmosphere

    International Nuclear Information System (INIS)

    Vladu, Mihaela; Ghitulescu, Alina; Popescu, Gheorghe; Piciorea, Iuliana

    2007-01-01

    The paper presents the dispersion calculation method that can be applied for the DLE calculation. This is necessary to ensure secure operation of the Experimental Pilot Plant for Tritium and Deuterium Separation (using a detritiation technology based upon isotopic catalytic exchange between tritiated heavy water and deuterium, followed by cryogenic distillation of the hydrogen isotopes). For the calculation of the dispersion of radioactive effluents in the atmosphere, at a given distance between source and receiver, the Gaussian mathematical model was used. This model is currently applied for estimating the long-term results of dispersion in the case of continuous or intermittent emissions, as basic information for long-term radioprotection measures for areas of the order of kilometres from the source. We have considered intermittent or continuous emissions of intensity lower than 1% per day relative to the annual emission. The radioactive material released into the environment is assumed to follow a Gaussian dispersion in both the horizontal and vertical planes. The local dispersion parameters can be determined directly from turbulence measurements or indirectly by determining the atmospheric stability. Weather parameters characterizing the atmospheric dispersion include: the wind direction relative to the source; the wind speed at the height of emission; the dispersion parameters at different distances, depending on the atmospheric turbulence which characterizes the mixing of radioactive materials in the atmosphere; the atmospheric stability class; the height of the mixing layer; and the type and intensity of precipitation. The choice of the most adequate version of the Gaussian model depends on the relation between the height at which the effluent emission takes place, H (m), and the height at which buildings influence the air motion, HB (m). Three zones of distinct dispersion were defined; these zones can have variable lengths

  7. Relationship between the complex susceptibility and the plasma dispersion function

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez D, H.; Cabral P, A

    1991-04-15

    It is shown that when magnetization processes in a spin system and resonant excitation of spin states occur in the presence of internal and/or external random line-broadening mechanisms, the complex magnetic susceptibility of the spin system is proportional to the plasma dispersion function. The relationship found in this letter could be useful in spectroscopies such as EPR and NMR, for example, since fitting it to experimental absorption and dispersion profiles yields their Lorentzian and Gaussian contents. (Author)

  8. Relationship between the complex susceptibility and the plasma dispersion function

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1991-04-01

    It is shown that when magnetization processes in a spin system and resonant excitation of spin states occur in the presence of internal and/or external random line-broadening mechanisms, the complex magnetic susceptibility of the spin system is proportional to the plasma dispersion function. The relationship found in this letter could be useful in spectroscopies such as EPR and NMR, for example, since fitting it to experimental absorption and dispersion profiles yields their Lorentzian and Gaussian contents. (Author)

  9. The Gaussian streaming model and convolution Lagrangian effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  10. The backward phase flow and FBI-transform-based Eulerian Gaussian beams for the Schroedinger equation

    International Nuclear Information System (INIS)

    Leung Shingyu; Qian Jianliang

    2010-01-01

    We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schroedinger equation in the semi-classical regime. The idea of Eulerian Gaussian beams has been first proposed in . In this paper we aim at two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing the reinitialization strategy into the Eulerian Gaussian beam framework. Essentially we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms.

  11. Graphical Gaussian models with edge and vertex symmetries

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2008-01-01

    We introduce new types of graphical Gaussian models by placing symmetry restrictions on the concentration or correlation matrix. The models can be represented by coloured graphs, where parameters that are associated with edges or vertices of the same colour are restricted to being identical. We study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is for example the case when symmetries are generated by permutation ...

  12. Turbulent Plume Dispersion over Two-dimensional Idealized Urban Street Canyons

    Science.gov (United States)

    Wong, C. C. C.; Liu, C. H.

    2012-04-01

    Human activities are the primary pollutant sources that degrade living quality in the current era of dense and compact cities. A simple and reasonably accurate pollutant dispersion model is helpful for reducing pollutant concentrations at city or neighborhood scales by refining architectural design or urban planning. The conventional method to estimate pollutant concentrations from point/line sources is the Gaussian plume model with empirical dispersion coefficients, whose accuracy is good for rural areas. However, the dispersion coefficients account only for atmospheric stability and streamwise distance and often overlook the roughness of urban surfaces. Large-scale buildings erected in urban areas significantly modify the surface roughness, which in turn affects pollutant transport in the urban canopy layer (UCL). We hypothesize that the aerodynamic resistance is another factor governing the dispersion coefficient in the UCL. This study is thus conceived to examine the effects of urban roughness on pollutant dispersion coefficients and plume behaviors. Large-eddy simulations (LESs) are carried out to examine the plume dispersion from a ground-level pollutant source over idealized 2D street canyons in neutral stratification. Computations with a wide range of aspect ratios (ARs), covering the skimming flow to isolated flow regimes, are conducted. The vertical profiles of pollutant distribution for different values of the friction factor are compared, and all reach a self-similar Gaussian shape. Preliminary results show that the pollutant dispersion is closely related to the friction factor. For relatively small roughness, the dispersion coefficients vary linearly with the friction factor until the roughness exceeds a certain level; when the friction factor is large, its effect on the dispersion coefficient is less significant. Since the linear region covers at least one-third of the full range of friction factor in our empirical

  13. Inferring Trial-to-Trial Excitatory and Inhibitory Synaptic Inputs from Membrane Potential using Gaussian Mixture Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Milad eLankarany

    2013-09-01

    Time-varying excitatory and inhibitory synaptic inputs govern the activity of neurons and process information in the brain. The importance of trial-to-trial fluctuations of synaptic inputs has recently been investigated in neuroscience. Such fluctuations are ignored in most conventional techniques because they are removed when trials are averaged in linear regression techniques. Here, we propose a novel recursive algorithm based on Gaussian mixture Kalman filtering for estimating time-varying excitatory and inhibitory synaptic inputs from single trials of noisy membrane potential in current clamp recordings. The Kalman filtering is followed by an expectation maximization algorithm to infer the statistical parameters (time-varying mean and variance) of the synaptic inputs in a non-parametric manner. As our proposed algorithm is repeated recursively, the inferred parameters of the mixtures are used to initiate the next iteration. Unlike other recent algorithms, our algorithm does not assume an a priori distribution from which the synaptic inputs are generated; instead, it recursively estimates such a distribution by fitting a Gaussian mixture model. The performance of the proposed algorithm is compared to a previously proposed PF-based algorithm (Paninski et al., 2012) with several illustrative examples, assuming that the distribution of synaptic input is unknown. If the noise is small, the performance of our algorithm is similar to that of the previous one; however, if the noise is large, it can significantly outperform the previous proposal. These promising results suggest that our algorithm is a robust and efficient technique for estimating time-varying excitatory and inhibitory synaptic conductances from single trials of membrane potential recordings.

  14. Extragalactic dispersion measures of fast radio bursts

    International Nuclear Information System (INIS)

    Xu, Jun; Han, J. L.

    2015-01-01

    Fast radio bursts show large dispersion measures, much larger than the Galactic dispersion measure foreground. Therefore, they evidently have an extragalactic origin. We investigate possible contributions to the dispersion measure from host galaxies. We simulate the spatial distribution of fast radio bursts and calculate the dispersion measures along the sightlines from fast radio bursts to the edge of host galaxies by using the scaled NE2001 model for thermal electron density distributions. We find that contributions to the dispersion measure of fast radio bursts from the host galaxy follow a skew Gaussian distribution. The peak and the width at half maximum of the dispersion measure distribution increase with the inclination angle of a spiral galaxy, to large values when the inclination angle is over 70°. The largest dispersion measure produced by an edge-on spiral galaxy can reach a few thousand pc cm^-3, while the dispersion measures from dwarf galaxies and elliptical galaxies have a maximum of only a few tens of pc cm^-3. Notice, however, that additional dispersion measures of tens to hundreds of pc cm^-3 can be produced by high density clumps in host galaxies. Simulations that include dispersion measure contributions from the Large Magellanic Cloud and the Andromeda Galaxy are shown as examples to demonstrate how to extract the dispersion measure from the intergalactic medium. (paper)

  15. High Precision Edge Detection Algorithm for Mechanical Parts

    Science.gov (United States)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on a Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line is constructed for the backlight image, combining the point spread function and the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise is fitted with the Gaussian integral model. A precise subpixel edge location is thus determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center, indicating that the method is reliable enough to meet the requirements of high-precision measurement.
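
    The core fitting step can be sketched as follows: gray values sampled along the edge normal are fitted with a Gaussian-integral (blurred step) model, and the fitted mean gives the subpixel edge position. The model form is standard; the function names, initial guesses and synthetic profile are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def step_edge_model(s, A, B, s0, sigma):
    """Gaussian-integral (blurred step) model of the gray values along the edge
    normal: a step of height B at s0, smeared by a Gaussian point spread function."""
    return A + 0.5 * B * (1.0 + erf((s - s0) / (np.sqrt(2.0) * sigma)))

def subpixel_edge_position(s, gray):
    """Fit the model to the sampled gray values and return the edge location s0."""
    p0 = [gray.min(), gray.max() - gray.min(), s[np.argmax(np.gradient(gray))], 1.0]
    popt, _ = curve_fit(step_edge_model, s, gray, p0=p0)
    return popt[2]

# Synthetic normal-section profile with a true edge at s0 = 3.4 pixels.
s = np.arange(0.0, 8.0, 0.5)
gray = step_edge_model(s, 40.0, 160.0, 3.4, 0.8) \
       + np.random.default_rng(0).normal(0.0, 1.0, s.size)
print(subpixel_edge_position(s, gray))
```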

  16. Dispersant effectiveness: Studies into the causes of effectiveness variations

    International Nuclear Information System (INIS)

    Fingas, M.F.; Kyle, D.; Tennyson, E.

    1995-01-01

    Effectiveness, a key issue in using dispersants, is affected by many interrelated factors. The principal factors involved are the oil composition, dispersant formulation, sea surface turbulence and dispersant quantity. Oil composition is a very strong determinant: the effectiveness of current dispersant formulations correlates strongly with the amount of the saturate component in the oil, while the other components of the oil (the asphaltenes, resins or polars, and aromatic fractions) show a negative correlation with dispersant effectiveness. Viscosity is also a predictor of dispersant effectiveness and may have an effect because it is in turn determined by oil composition. Dispersant composition is significant and interacts with oil composition; dispersants show high effectiveness at HLB values near 10. Sea turbulence strongly affects dispersant effectiveness, which rises with increasing turbulence to a maximum value. The effectiveness of current commercial dispersants is Gaussian around a peak salinity value. Peak effectiveness is achieved at very high dispersant quantities, at a dispersant-to-oil volume ratio of 1:5. For the oils tested and under the conditions measured, dispersant effectiveness is approximately logarithmic with dispersant quantity, reaching about 50% of its peak value at a dispersant-to-oil ratio of about 1:20 and near zero at a ratio of about 1:50.

  17. Numerical investigations of non-collinear optical parametric chirped pulse amplification for Laguerre-Gaussian vortex beam

    Science.gov (United States)

    Xu, Lu; Yu, Lianghong; Liang, Xiaoyan

    2016-04-01

    We present for the first time a scheme to amplify a Laguerre-Gaussian vortex beam based on non-collinear optical parametric chirped pulse amplification (OPCPA). In addition, a three-dimensional numerical model of non-collinear optical parametric amplification was deduced in the frequency domain, in which the effects of non-collinear configuration, temporal and spatial walk-off, group-velocity dispersion and diffraction were also taken into account, to trace the dynamics of the Laguerre-Gaussian vortex beam and investigate its critical parameters in the non-collinear OPCPA process. Based on the numerical simulation results, the scheme shows promise for implementation in a relativistic twisted laser pulse system, which will diversify the light-matter interaction field.

  18. Dispersion of acoustic surface waves by velocity gradients

    Science.gov (United States)

    Kwon, S. D.; Kim, H. C.

    1987-10-01

    The perturbation theory of Auld [Acoustic Fields and Waves in Solids (Wiley, New York, 1973), Vol. II, p. 294], which describes the effect of a subsurface gradient on the velocity dispersion of surface waves, has been modified to a simpler form by an approximation using a newly defined velocity gradient for the case of isotropic materials. The modified theory is applied to nitrogen implantation in AISI 4140 steel with a velocity gradient of Gaussian profile, and compared with dispersion data obtained by the ultrasonic right-angle technique in the frequency range from 2.4 to 14.8 MHz. The good agreement between experiments and our theory suggests that the compound layer in the subsurface region plays a dominant role in causing the dispersion of acoustic surface waves.

  19. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Marquardt algorithm by varying conditions such as inputs, hidden neurons, initialization, training sets and random Gaussian noise injection to ... Several such ensembles formed the population which was evolved to generate the fittest ensemble.

  20. Sensitivity, applicability and validation of bi-gaussian off- and on-line models for the evaluation of the consequences of accidental releases in nuclear facilities

    International Nuclear Information System (INIS)

    Kretzschmar, J.G.; Mertens, I.; Vanderborght, B.

    1984-01-01

    A computer code CAERS (Computer Aided Emergency Response System) has been developed for the simulation of the short-term concentrations caused by an atmospheric emission. The concentration calculations are based on the bi-Gaussian theorem, with the possibility of using twelve different sets of turbulence typing schemes and dispersion parameters, or the plume can be simulated with a bi-dimensional puff trajectory model with tri-Gaussian diffusion of the puffs. With the puff trajectory model the emission and the wind conditions can be variable in time. Sixteen SF6 tracer dispersion experiments, with mobile as well as stationary time-averaging sampling, have been carried out for the validation of the on-line and off-line models of CAERS. The tracer experiments of this study have shown that the CAERS system, using the bi-Gaussian model and the SCK/CEN turbulence typing scheme, can simulate short-time concentration levels very well. The variations of the plume under non-steady emission and meteorological conditions are well simulated by the puff trajectory model. This leads to the general conclusion that the atmospheric dispersion models of the CAERS system can give a significant contribution to the management and the interpretation of air pollution concentration measurements in emergency situations.
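
    For orientation, the following sketch evaluates the ground-reflected ("bi-Gaussian") plume formula that such codes are built around; the release rate, wind speed, effective height and the simple power-law dispersion parameters are illustrative assumptions, not CAERS's turbulence typing schemes.

      import numpy as np

      def plume_concentration(y, z, Q, u, H, sigma_y, sigma_z):
          """Ground-reflected Gaussian plume; with Q in Bq/s and u in m/s the result is in Bq/m^3."""
          lateral = np.exp(-y**2 / (2.0 * sigma_y**2)) / (np.sqrt(2.0 * np.pi) * sigma_y)
          vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                      np.exp(-(z + H)**2 / (2.0 * sigma_z**2))) / (np.sqrt(2.0 * np.pi) * sigma_z)
          return Q / u * lateral * vertical

      x = 500.0                                   # m downwind of the stack
      sigma_y, sigma_z = 0.08 * x, 0.06 * x       # assumed power-law dispersion parameters
      c = plume_concentration(y=0.0, z=0.0, Q=1.0e9, u=2.0, H=30.0,
                              sigma_y=sigma_y, sigma_z=sigma_z)
      print("ground-level centreline concentration at 500 m: %.3e Bq/m^3" % c)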

  1. Adaptive Electronic Dispersion Compensator for Chromatic and Polarization-Mode Dispersions in Optical Communication Systems

    Directory of Open Access Journals (Sweden)

    Koc Ut-Va

    2005-01-01

    Full Text Available The widely-used LMS algorithm for coefficient updates in adaptive (feedforward/decision-feedback) equalizers is found to be suboptimal for ASE-dominant systems, but various coefficient-dithering approaches suffer from a slow adaptation rate without guarantee of convergence. In view of the non-Gaussian nature of optical noise after the square-law optoelectronic conversion, we propose to apply the higher-order least-mean 2Nth-order (LMN) algorithms, resulting in an OSNR penalty which is 1.5–2 dB less than that of LMS. Furthermore, combined with adjustable slicer threshold control, the proposed equalizer structures are demonstrated through extensive Monte Carlo simulations to achieve better performance.

  2. Laser Raman detection of platelets for early and differential diagnosis of Alzheimer’s disease based on an adaptive Gaussian process classification algorithm

    International Nuclear Information System (INIS)

    Luo, Yusheng; Du, Z W; Yang, Y J; Chen, P; Wang, X H; Cheng, Y; Peng, J; Shen, A G; Hu, J M; Tian, Q; Shang, X L; Liu, Z C; Yao, X Q; Wang, J Z

    2013-01-01

    Early and differential diagnosis of Alzheimer’s disease (AD) has puzzled many clinicians. In this work, laser Raman spectroscopy (LRS) was developed to diagnose AD from platelet samples from AD transgenic mice and non-transgenic controls of different ages. An adaptive Gaussian process (GP) classification algorithm was used to re-establish the classification models of early AD, advanced AD and the control group with just two features and the capacity for noise reduction. Compared with the previous multilayer perceptron network method, the GP showed much better classification performance with the same feature set. Besides, spectra of platelets isolated from AD and Parkinson’s disease (PD) mice were also discriminated. Spectral data from 4 month AD (n = 39) and 12 month AD (n = 104) platelets, as well as control data (n = 135), were collected. Prospective application of the algorithm to the data set resulted in a sensitivity of 80%, a specificity of about 100% and a Matthews correlation coefficient of 0.81. Samples from PD (n = 120) platelets were also collected for differentiation from 12 month AD. The results suggest that platelet LRS detection analysis with the GP appears to be an easier and more accurate method than current ones for early and differential diagnosis of AD. (paper)

  3. Comparison of Pilot Symbol Embedded Channel Estimation Algorithms

    Directory of Open Access Journals (Sweden)

    P. Kadlec

    2009-12-01

    Full Text Available In the paper, algorithms for pilot symbol embedded channel estimation are compared. Attention is turned to the Least Square (LS) channel estimation and the Sliding Correlator (SC) algorithm. Both algorithms are implemented in Matlab to estimate the Channel Impulse Response (CIR) of a channel exhibiting multi-path propagation. The algorithms are compared from the viewpoint of computational demands and of the influence of Additive White Gaussian Noise (AWGN), the embedded pilot symbol and the computed CIR on the estimation error.
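
    A minimal sketch of the pilot-based Least Square idea compared in this record: a known pilot sequence is sent through a hypothetical multipath channel with AWGN, and the CIR is recovered by solving the linear least-squares problem built from the pilot's convolution matrix; all numbers are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      L = 4                                          # number of CIR taps to estimate
      pilot = rng.choice([-1.0, 1.0], size=64)       # known BPSK pilot symbols
      h_true = np.array([0.9, 0.5, -0.3, 0.1])       # hypothetical multipath CIR

      # Received pilot portion: channel convolution plus AWGN.
      rx = np.convolve(pilot, h_true)[:pilot.size] + 0.05 * rng.standard_normal(pilot.size)

      # Convolution matrix P such that rx is approximately P @ h.
      P = np.zeros((pilot.size, L))
      for k in range(L):
          P[k:, k] = pilot[:pilot.size - k]

      h_ls, *_ = np.linalg.lstsq(P, rx, rcond=None)  # Least Square CIR estimate
      print("LS estimate of the CIR:", np.round(h_ls, 3))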

  4. An Analytical Method for the Abel Inversion of Asymmetrical Gaussian Profiles

    International Nuclear Information System (INIS)

    Xu Guosheng; Wan Baonian

    2007-01-01

    An analytical algorithm for fast calculation of the Abel inversion for density profile measurement in a tokamak is developed. Based upon the assumptions that the particle source is negligibly small in the plasma core region, that density profiles can be approximated by an asymmetrical Gaussian distribution controlled by only one parameter V0/D, and that V0/D is constant along the radial direction, the analytical algorithm is presented and examined against a testing profile. Its validity is confirmed by benchmarking against the standard Abel inversion method and the theoretical profile. The scope of application as well as the error analysis is also discussed in detail.

  5. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    Science.gov (United States)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  6. Preliminary analysis of accidents of the Santa QuitÉRia Project: gaussian methodology for atmospheric dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Anjos, Gullit Diego C. dos, E-mail: gullitcardoso@inb.gov.br [Indústrias Nucleares do Brasil (INB), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    The Santa Quitéria Project (PSQ) is an enterprise that aims at the production of phosphate compounds as main products, and of uranium concentrates as by-products, from the minerals of the Itataia deposit. The intended area for implementation of the project is located in the municipality of Santa Quitéria, north central region of the State of Ceará. The U.S. Nuclear Regulatory Commission lists the basic design accidents for a Uranium Processing Plant, such as the PSQ. Among all these scenarios, fire in uranium extraction cells was the one with the highest doses released. For simulation of the fire event, the atmospheric dispersion model used was the standard Gaussian plume model. The doses were calculated for two sets of meteorological conditions: stability class F, with wind velocity of 1 m/s, and stability class D, with wind velocity of 4.5 m/s. For the PSQ, it was considered that there will be no public individual until after 2000 meters from the release point. The doses corresponding to the occupationally exposed individuals are: 2.0E-3 mSv (Class D) and 5 mSv (Class F), 300 meters away from the event. Analyzing the results, it can be concluded that there are no significant radiological consequences for the occupationally exposed individual, in both stability class D and class F. The doses, in mSv, corresponding to the individuals located near the project are: Morrinhos (1.27E-03 mSv - Class D and 3.92E-02 - Class F); Burned (1.41E-03 mSv - Class D and 4.28E-02 - Class F); Lagoa do Mato (<7.94E-04 mSv - Class D and <2.68E-02 mSv - Class F). (author)

  7. Preliminary analysis of accidents of the Santa QuitÉRia Project: gaussian methodology for atmospheric dispersion

    International Nuclear Information System (INIS)

    Anjos, Gullit Diego C. dos

    2017-01-01

    The Santa Quitéria Project (PSQ) is an enterprise that aims at the production of phosphate compounds as main products, and of uranium concentrates as by-products, from the minerals of the Itataia deposit. The intended area for implementation of the project is located in the municipality of Santa Quitéria, north central region of the State of Ceará. The U.S. Nuclear Regulatory Commission lists the basic design accidents for a Uranium Processing Plant, such as the PSQ. Among all these scenarios, fire in uranium extraction cells was the one with the highest doses released. For simulation of the fire event, the atmospheric dispersion model used was the standard Gaussian plume model. The doses were calculated for two sets of meteorological conditions: stability class F, with wind velocity of 1 m/s, and stability class D, with wind velocity of 4.5 m/s. For the PSQ, it was considered that there will be no public individual until after 2000 meters from the release point. The doses corresponding to the occupationally exposed individuals are: 2.0E-3 mSv (Class D) and 5 mSv (Class F), 300 meters away from the event. Analyzing the results, it can be concluded that there are no significant radiological consequences for the occupationally exposed individual, in both stability class D and class F. The doses, in mSv, corresponding to the individuals located near the project are: Morrinhos (1.27E-03 mSv - Class D and 3.92E-02 - Class F); Burned (1.41E-03 mSv - Class D and 4.28E-02 - Class F); Lagoa do Mato (<7.94E-04 mSv - Class D and <2.68E-02 mSv - Class F). (author)

  8. Performance in population models for count data, part II: a new SAEM algorithm

    Science.gov (United States)

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795

  9. Fractal scattering of Gaussian solitons in directional couplers with logarithmic nonlinearities

    Energy Technology Data Exchange (ETDEWEB)

    Teixeira, Rafael M.P.; Cardoso, Wesley B., E-mail: wesleybcardoso@gmail.com

    2016-08-12

    In this paper we study the interaction of Gaussian solitons in a dispersive and nonlinear medium with log-law nonlinearity. The model is described by the coupled logarithmic nonlinear Schrödinger equations, which form a nonintegrable system that allows the observation of a very rich scenario in the collision patterns. By employing a variational approach and direct numerical simulations, we observe a fractal-scattering phenomenon in the exit velocities of each soliton as a function of the input velocities. Furthermore, we introduce a linearization model to identify the position of the reflection/transmission window that emerges within the chaotic region. This opens up the possibility of controlling the scattering of solitons as well as the lifetime of bound states. - Highlights: • We study the interaction of Gaussian solitons in a system with log-law nonlinearity. • The model is described by the coupled logarithmic nonlinear Schrödinger equations. • We observe a fractal-scattering phenomenon of the solitons.

  10. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
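
    A much-reduced 1-D sketch of drawing a lattice Gaussian random field sample subject to a linear constraint (here a prescribed "peak" value at one site), using the standard Gaussian conditioning identity rather than the paper's Monte Carlo path-integral machinery; the grid, covariance and constraint value are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 128
      x = np.linspace(0.0, 1.0, n)

      # Squared-exponential covariance on the lattice, with a small jitter for numerical stability.
      C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2) + 1e-6 * np.eye(n)

      # Unconstrained realisation via Cholesky factorisation.
      f = np.linalg.cholesky(C) @ rng.standard_normal(n)

      # Constraint: field value 3.0 (a peak) at lattice site i0.
      i0, c_val = n // 2, 3.0
      H = np.zeros((1, n)); H[0, i0] = 1.0

      # Condition the sample on the constraint (kriging-style correction of the realisation).
      K = C @ H.T @ np.linalg.inv(H @ C @ H.T)
      f_constrained = f + (K @ (np.array([c_val]) - H @ f)).ravel()
      print("value at the constrained site:", round(float(f_constrained[i0]), 6))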

  11. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.

  12. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and to obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower, with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
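
    A minimal sketch of how a nonparametric bootstrap can be wrapped around a Gaussian mixture fit to obtain a characteristic ratio and a confidence interval for it; the data are hypothetical, the mixture order is fixed at two rather than chosen by K-means, and the mapping from the mixture to a single CR is an assumption.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      cr = rng.normal(0.55, 0.05, size=30)      # hypothetical systolic characteristic ratios

      boot_estimates = []
      for _ in range(500):                      # nonparametric bootstrap resamples
          sample = rng.choice(cr, size=cr.size, replace=True).reshape(-1, 1)
          gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
          # Weighted mean of the mixture taken as the bootstrap CR estimate.
          boot_estimates.append(float(np.sum(gmm.weights_ * gmm.means_.ravel())))

      lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
      print("bootstrap CR estimate: %.3f, 95%% CI: (%.3f, %.3f)"
            % (np.mean(boot_estimates), lo, hi))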

  13. Probabilistic electricity price forecasting with variational heteroscedastic Gaussian process and active learning

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Lin; Lou, Jianyong

    2015-01-01

    Highlights: • A novel active learning model for the probabilistic electricity price forecasting. • Heteroscedastic Gaussian process that captures the local volatility of the electricity price. • Variational Bayesian learning that avoids over-fitting. • Active learning algorithm that reduces the computational efforts. - Abstract: Electricity price forecasting is essential for the market participants in their decision making. Nevertheless, the accuracy of such forecasting cannot be guaranteed due to the high variability of the price data. For this reason, in many cases, rather than merely point forecasting results, market participants are more interested in the probabilistic price forecasting results, i.e., the prediction intervals of the electricity price. Focusing on this issue, this paper proposes a new model for the probabilistic electricity price forecasting. This model is based on the active learning technique and the variational heteroscedastic Gaussian process (VHGP). It provides the heteroscedastic Gaussian prediction intervals, which effectively quantify the heteroscedastic uncertainties associated with the price data. Because the high computational effort of VHGP hinders its application to the large-scale electricity price forecasting tasks, we design an active learning algorithm to select a most informative training subset from the whole available training set. By constructing the forecasting model on this smaller subset, the computational efforts can be significantly reduced. In this way, the practical applicability of the proposed model is enhanced. The forecasting performance and the computational time of the proposed model are evaluated using the real-world electricity price data, which is obtained from the ANEM, PJM, and New England ISO

  14. Pollutant Plume Dispersion over Hypothetical Urban Areas based on Wind Tunnel Measurements

    Science.gov (United States)

    Mo, Ziwei; Liu, Chun-Ho

    2017-04-01

    The Gaussian plume model is commonly adopted for pollutant concentration prediction in the atmospheric boundary layer (ABL). However, it has a number of limitations when applied to pollutant dispersion over complex land-surface morphology. In this study, the friction factor (f), a measure of aerodynamic resistance induced by rough surfaces in the engineering community, was proposed to parameterize the vertical dispersion coefficient (σz) in the Gaussian model. A series of wind tunnel experiments were carried out to verify the mathematical hypothesis and to characterize plume dispersion as a function of surface roughness. Hypothetical urban areas, which were assembled in the form of idealized street canyons of different aspect (building-height-to-street-width) ratios (AR = 1/2, 1/4, 1/8 and 1/12), were fabricated by aligning identical square aluminum bars at different separations apart in cross flows. Pollutant emitted from a ground-level line source into the turbulent boundary layer (TBL) was simulated using water vapour generated by an ultrasonic atomizer. The humidity and the velocity (mean and fluctuating components) were measured, respectively, by humidity sensors and hot-wire anemometry (HWA) with X-wire probes in the streamwise and vertical directions. Wind tunnel results showed that the pollutant concentration exhibits the conventional Gaussian distribution, suggesting the feasibility of using water vapour as a passive scalar in wind tunnel experiments. The friction factor increased with decreasing aspect ratio (widening the building separation). It peaked at AR = 1/8 and decreased thereafter. Besides, a positive correlation between σz/x^n (where x is the distance from the pollutant source) and f^1/4 (correlation coefficient r² = 0.61) was observed, formulating the basic parameterization of plume dispersion over urban areas.
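
    As a small illustration of the proposed parameterization, the sketch below fits the proportionality σz = a · f^1/4 · x^n by least squares through the origin on hypothetical wind-tunnel readings; the numbers, the exponent n and the single measurement distance are all assumptions.

      import numpy as np

      # Hypothetical readings: friction factor f and vertical spread sigma_z (m),
      # one pair per street-canyon aspect ratio, all at downwind distance x.
      f       = np.array([0.04, 0.07, 0.11, 0.09])
      sigma_z = np.array([0.021, 0.026, 0.030, 0.028])
      x, n = 0.5, 0.7                                  # assumed distance (m) and exponent

      predictor = f**0.25
      response = sigma_z / x**n
      slope = np.sum(predictor * response) / np.sum(predictor**2)   # fit through the origin
      r = np.corrcoef(predictor, response)[0, 1]
      print("fitted a in sigma_z = a * f**0.25 * x**n: %.3f (r^2 = %.2f)" % (slope, r**2))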

  15. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    Science.gov (United States)

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms outperform the existing algorithms significantly and can produce efficient paths with payoffs near the optimal.

  16. MixSim : An R Package for Simulating Data to Study Performance of Clustering Algorithms

    Directory of Open Access Journals (Sweden)

    Volodymyr Melnykov

    2012-11-01

    Full Text Available The R package MixSim is a new tool that allows simulating mixtures of Gaussian distributions with different levels of overlap between mixture components. Pairwise overlap, defined as the sum of two misclassification probabilities, measures the degree of interaction between components and can be readily employed to control the clustering complexity of datasets simulated from mixtures. These datasets can then be used for systematic performance investigation of clustering and finite mixture modeling algorithms. Other capabilities of MixSim include computing the exact overlap for Gaussian mixtures, simulating Gaussian and non-Gaussian data, simulating outliers and noise variables, calculating various measures of agreement between two partitionings, and constructing parallel distribution plots for the graphical display of finite mixture models. All features of the package are illustrated in great detail. The utility of the package is highlighted through a small comparison study of several popular clustering algorithms.
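
    A univariate sketch of the overlap measure that MixSim controls: the pairwise overlap of two mixture components is the sum of the two misclassification probabilities, estimated here by Monte Carlo for two Gaussian components with equal (assumed) mixing proportions; the package itself handles the general multivariate case.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      m1, s1 = 0.0, 1.0          # component 1 (assumed mean and standard deviation)
      m2, s2 = 2.0, 1.5          # component 2 (assumed mean and standard deviation)

      def misclass_prob(m_from, s_from, m_to, s_to, n=200_000):
          """P(the other component has the higher density at X), with X drawn from component 'from'."""
          x = rng.normal(m_from, s_from, size=n)
          return float(np.mean(norm.pdf(x, m_to, s_to) > norm.pdf(x, m_from, s_from)))

      overlap = misclass_prob(m1, s1, m2, s2) + misclass_prob(m2, s2, m1, s1)
      print("pairwise overlap (sum of misclassification probabilities): %.3f" % overlap)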

  17. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k$ nonzero entries can be recovered from its sketch $Ax$ by the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

  18. Enhancements to AERMOD's building downwash algorithms based on wind-tunnel and Embedded-LES modeling

    Science.gov (United States)

    Monbureau, E. M.; Heist, D. K.; Perry, S. G.; Brouwer, L. H.; Foroutan, H.; Tang, W.

    2018-04-01

    Knowing the fate of effluent from an industrial stack is important for assessing its impact on human health. AERMOD is one of several Gaussian plume models containing algorithms to evaluate the effect of buildings on the movement of the effluent from a stack. The goal of this study is to improve AERMOD's ability to accurately model important and complex building downwash scenarios by incorporating knowledge gained from a recently completed series of wind tunnel studies and complementary large eddy simulations of flow and dispersion around simple structures for a variety of building dimensions, stack locations, stack heights, and wind angles. This study presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD's building pre-processor to better represent elongated buildings in oblique winds. These modifications are demonstrated to improve the ability of AERMOD to model observed ground-level concentrations in the vicinity of a building for the variety of conditions examined in the wind tunnel and numerical studies.

  19. Vortices in Gaussian beams

    CSIR Research Space (South Africa)

    Roux, FS

    2009-01-01

    Full Text Available Presentation on vortex dipoles in Gaussian beams (CSIR National Laser Centre). The Gaussian beam is written in normalised coordinates as g(u, v, t) = exp(−(u² + v²)/(1 − it)), with u = x/ω₀, v = y/ω₀, t = z/ρ and ρ = πω₀²/λ, where ω₀ is the 1/e² beam waist radius and ρ the Rayleigh range; the integral over the Gaussian beam is evaluated once and for all, after which differentiation is used in place of integration.

  20. A dynamic feedforward neural network based on gaussian particle swarm optimization and its application for predictive control.

    Science.gov (United States)

    Han, Min; Fan, Jianchao; Wang, Jun

    2011-09-01

    A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays.
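
    A bare-bones sketch of a particle swarm step in which the usual uniform random coefficients are replaced by (absolute) Gaussian draws, in the spirit of GPSO; the toy sphere benchmark, the coefficients and the fixed velocity clip standing in for adaptive velocity ranges are assumptions, and the chaotic map is omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      dim, n_particles = 2, 20
      sphere = lambda p: np.sum(p**2, axis=-1)           # toy benchmark function

      pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      gbest = pos[np.argmin(sphere(pos))].copy()

      for _ in range(200):
          r1 = np.abs(rng.normal(0.0, 1.0, pos.shape))   # Gaussian instead of uniform draws
          r2 = np.abs(rng.normal(0.0, 1.0, pos.shape))
          vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          vel = np.clip(vel, -1.0, 1.0)                  # simplification of adaptive velocity ranges
          pos = pos + vel
          better = sphere(pos) < sphere(pbest)
          pbest[better] = pos[better]
          gbest = pbest[np.argmin(sphere(pbest))].copy()

      print("best value found on the sphere function: %.2e" % sphere(gbest))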

  1. Assessment of Safety Parameters for Radiological Explosion Based on Gaussian Dispersion Model

    Energy Technology Data Exchange (ETDEWEB)

    Pandey, Alok [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yu, Hyungjoon; Kim, Hong Suk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2014-10-15

    These sources, if used with explosive (called an RDD - radiological dispersion device), can cause dispersion of radioactive material resulting in public exposure and contamination of the environment. Radiological explosion devices are not weapons of mass destruction like atom bombs, but can cause the death of a few persons and contamination of large areas. The reduction of the threat of radiological weapon attack by terrorist groups causing dispersion of radioactive material is one of the priority tasks of the IAEA Nuclear Safety and Security Program. Emergency preparedness is an essential part of reducing and mitigating the radiological weapon threat. A preliminary assessment of the dispersion following a radiological explosion and its quantitative effect will be helpful to the emergency preparedness team for an early response. The effect of the radiological dispersion depends on various factors like the radioisotope, its activity, its physical form, the amount of explosive used and the meteorological factors at the time of an explosion. This study aims to determine the area affected by the radiological explosion as a pre-assessment, to provide feedback to emergency management teams for handling and mitigating the situation after an explosion. The most practical scenarios of radiological explosion are considered with a conservative approach for the assessment of the area under threat, for emergency handling and management purposes. Radioisotopes under weak security controls can be used for a radiological explosion to create terror and a socioeconomic threat for the public. Prior assessment of radiological threats helps emergency management teams take prompt decisions about evacuation of the affected area and other emergency handling actions. Comparable activities of a Co-60 source used in radiotherapy and a Sr-90 source of disused and orphaned RTGs, with two different quantities of TNT, were used for the scenario development of the radiological explosion. In the Basic Safety Standard (BSS

  2. Assessment of Safety Parameters for Radiological Explosion Based on Gaussian Dispersion Model

    International Nuclear Information System (INIS)

    Pandey, Alok; Yu, Hyungjoon; Kim, Hong Suk

    2014-01-01

    These sources, if used with explosive (called an RDD - radiological dispersion device), can cause dispersion of radioactive material resulting in public exposure and contamination of the environment. Radiological explosion devices are not weapons of mass destruction like atom bombs, but can cause the death of a few persons and contamination of large areas. The reduction of the threat of radiological weapon attack by terrorist groups causing dispersion of radioactive material is one of the priority tasks of the IAEA Nuclear Safety and Security Program. Emergency preparedness is an essential part of reducing and mitigating the radiological weapon threat. A preliminary assessment of the dispersion following a radiological explosion and its quantitative effect will be helpful to the emergency preparedness team for an early response. The effect of the radiological dispersion depends on various factors like the radioisotope, its activity, its physical form, the amount of explosive used and the meteorological factors at the time of an explosion. This study aims to determine the area affected by the radiological explosion as a pre-assessment, to provide feedback to emergency management teams for handling and mitigating the situation after an explosion. The most practical scenarios of radiological explosion are considered with a conservative approach for the assessment of the area under threat, for emergency handling and management purposes. Radioisotopes under weak security controls can be used for a radiological explosion to create terror and a socioeconomic threat for the public. Prior assessment of radiological threats helps emergency management teams take prompt decisions about evacuation of the affected area and other emergency handling actions. Comparable activities of a Co-60 source used in radiotherapy and a Sr-90 source of disused and orphaned RTGs, with two different quantities of TNT, were used for the scenario development of the radiological explosion. In the Basic Safety Standard (BSS

  3. Simulation of atmospheric dispersion of radionuclides using an Eulerian-Lagrangian modelling system.

    Science.gov (United States)

    Basit, Abdul; Espinosa, Francisco; Avila, Ruben; Raza, S; Irfan, N

    2008-12-01

    In this paper we present an atmospheric dispersion scenario for a proposed nuclear power plant in Pakistan involving the hypothetical accidental release of radionuclides. For this, a concept involving a Lagrangian stochastic particle model (LSPM) coupled with an Eulerian regional atmospheric modelling system (RAMS) is used. The atmospheric turbulent dispersion of radionuclides (represented by non-buoyant particles/neutral tracers) in the LSPM is modelled by applying non-homogeneous turbulence conditions. The mean wind velocities governed by the topography of the region and the surface fluxes of momentum and heat are calculated by the RAMS code. A moving least squares (MLS) technique is introduced to calculate the concentration of radionuclides at ground level. The numerically calculated vertical profiles of wind velocity and temperature are compared with observed data. The results obtained demonstrate that in regions of complex terrain it is not sufficient to model the atmospheric dispersion of particles using a straight-line Gaussian plume model, and that by utilising a Lagrangian stochastic particle model and regional atmospheric modelling system a much more realistic estimation of the dispersion in such a hypothetical scenario was ascertained. The particle dispersion results for a 12 h ground release show that a triangular area of about 400 km² situated in the north-west quadrant of release is under radiological threat. The particle distribution shows that the use of a Gaussian plume model (GPM) in such situations will yield quite misleading results.
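
    A much-simplified 1-D sketch of the Lagrangian stochastic idea behind the LSPM: each tracer particle's vertical velocity follows a Langevin equation with a turbulence time scale, and particles are reflected at the ground; the turbulence parameters and the homogeneous-turbulence simplification are assumptions, and the RAMS coupling and the moving-least-squares concentration estimate are not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)
      n_particles, n_steps, dt = 5000, 600, 1.0       # assumed puff of neutral tracers
      u_mean, sigma_w, tau = 3.0, 0.5, 100.0          # mean wind (m/s), vertical turbulence (m/s), time scale (s)

      x = np.zeros(n_particles)                       # downwind position (m)
      z = np.full(n_particles, 50.0)                  # release height (m)
      w = rng.normal(0.0, sigma_w, n_particles)       # turbulent vertical velocity (m/s)

      for _ in range(n_steps):
          # Langevin step for the vertical velocity (homogeneous turbulence for simplicity).
          w += -w / tau * dt + sigma_w * np.sqrt(2.0 * dt / tau) * rng.standard_normal(n_particles)
          x += u_mean * dt
          z += w * dt
          hit = z < 0.0                               # perfect reflection at the ground
          z[hit] *= -1.0
          w[hit] *= -1.0

      print("downwind travel: %.0f m, plume depth (std of z): %.1f m" % (x.mean(), z.std()))
      print("fraction of tracers below 10 m: %.3f" % np.mean(z < 10.0))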

  4. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan; Genton, Marc G.

    2017-01-01

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on the Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present the result for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
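
    A small sketch of how a Tukey g-and-h random field can be generated on a transect: a Gaussian field with an exponential covariance (the Matérn member with smoothness 1/2, used here for brevity) is transformed marginally with the g-and-h function, g controlling skewness and h tail heaviness; the grid, range, g and h values are assumptions.

      import numpy as np

      def tukey_gh(z, g=0.5, h=0.2):
          """Marginal Tukey g-and-h transform (g: skewness, h: tail heaviness), for g != 0."""
          return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

      rng = np.random.default_rng(6)
      n = 200
      s = np.linspace(0.0, 10.0, n)
      C = np.exp(-np.abs(s[:, None] - s[None, :]) / 1.5) + 1e-10 * np.eye(n)

      z = np.linalg.cholesky(C) @ rng.standard_normal(n)   # Gaussian random field on the transect
      x = tukey_gh(z)                                      # trans-Gaussian (skewed, heavy-tailed) field
      print("sample skewness of the transformed field: %.2f"
            % (np.mean((x - x.mean())**3) / x.std()**3))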

  5. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan

    2017-07-13

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on the Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present the result for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.

  6. High Precision Edge Detection Algorithm for Mechanical Parts

    Directory of Open Access Journals (Sweden)

    Duan Zhenyun

    2018-04-01

    Full Text Available High precision and high efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate as well as gray information affected by noise is fitted in accordance with the Gaussian integral model. A precise subpixel edge location is therefore determined by searching for the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms. The subpixel edge location accuracy and computation speed are improved, and the maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center. This indicates that the method is sufficiently reliable to meet the requirement of high precision measurement.

  7. Analytical relation between effective mode field area and waveguide dispersion in microstructure fibers.

    Science.gov (United States)

    Moenster, Mathias; Steinmeyer, Günter; Iliew, Rumen; Lederer, Falk; Petermann, Klaus

    2006-11-15

    For optical fibers exhibiting a radially symmetric refractive index profile, there exists an analytical relation that connects waveguide dispersion and the Petermann-II mode field radius. We extend the usefulness of this relation to the nonradially symmetric case of microstructure fibers in the anomalous dispersion regime, yielding a simple relation between dispersion and effective mode field area. Assuming a Gaussian mode distribution, we derive a fundamental upper limit for the effective mode field area that is required to obtain a certain amount of anomalous waveguide dispersion. This relation is demonstrated to show excellent agreement for fiber designs suited for supercontinuum generation and soliton lasers in the near infrared.

  8. Stack emission monitoring using non-dispersive infrared spectroscopy with an optimized nonlinear absorption cross interference correction algorithm

    Directory of Open Access Journals (Sweden)

    Y. W. Sun

    2013-08-01

    Full Text Available In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) spectroscopy to monitor stack emissions in situ. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption. Thus, interference coefficients among different gases can be modeled by the interference functions whether the gases are characterized by linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for the validation of the proposed algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by factors of about 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well both within the linear and nonlinear ranges.
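
    A schematic sketch of the correction idea: the spurious absorbance that one gas induces on the other gas's channel is modelled by a fitted third-order polynomial of that gas's absorbance and removed by fixed-point iteration; the polynomial coefficients and channel readings below are hypothetical, not the prototype's calibrated interference functions.

      import numpy as np

      # Hypothetical third-order interference functions (numpy polynomial, highest power first):
      # absorbance induced on the CO channel by CO2, and on the CO2 channel by CO.
      p_co2_on_co = np.array([0.02, -0.05, 0.12, 0.0])
      p_co_on_co2 = np.array([0.01, -0.02, 0.05, 0.0])

      a_co2_meas, a_co_meas = 0.80, 0.30     # raw channel absorbances (hypothetical)

      a_co2, a_co = a_co2_meas, a_co_meas
      for _ in range(5):                     # fixed-point iteration of the cross-interference correction
          a_co = a_co_meas - np.polyval(p_co2_on_co, a_co2)
          a_co2 = a_co2_meas - np.polyval(p_co_on_co2, a_co)

      print("corrected absorbances: CO2 = %.4f, CO = %.4f" % (a_co2, a_co))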

  9. Detecting the presence of a magnetic field under Gaussian and non-Gaussian noise by adaptive measurement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuan-Mei; Li, Jun-Gang, E-mail: jungl@bit.edu.cn; Zou, Jian

    2017-06-15

    Highlights: • Adaptive measurement strategy is used to detect the presence of a magnetic field. • Gaussian Ornstein–Uhlenbeck noise and non-Gaussian noise have been considered. • Weaker magnetic fields may be more easily detected than some stronger ones. - Abstract: By using the adaptive measurement method we study how to detect whether a weak magnetic field is actually present or not under Gaussian noise and non-Gaussian noise. We find that the adaptive measurement method can effectively improve the detection accuracy. For the case of Gaussian noise, we find the stronger the magnetic field strength, the easier for us to detect the magnetic field. Counterintuitively, for non-Gaussian noise, some weaker magnetic fields are more likely to be detected rather than some stronger ones. Finally, we give a reasonable physical interpretation.

  10. Photoplethysmogram signal quality estimation using repeated Gaussian filters and cross-correlation

    International Nuclear Information System (INIS)

    Karlen, W; Kobayashi, K; Dumont, G A; Ansermino, J M

    2012-01-01

    Pulse oximeters are monitors that noninvasively measure heart rate and blood oxygen saturation (SpO2). Unfortunately, pulse oximetry is prone to artifacts which negatively impact the accuracy of the measurement and can cause a significant number of false alarms. We have developed an algorithm to segment pulse oximetry signals into pulses and estimate the signal quality in real time. The algorithm iteratively calculates a signal quality index (SQI) ranging from 0 to 100. In the presence of artifacts and irregular signal morphology, the algorithm outputs a low SQI number. The pulse segmentation algorithm uses the derivative of the signal to find pulse slopes and an adaptive set of repeated Gaussian filters to select the correct slopes. Cross-correlation of consecutive pulse segments is used to estimate signal quality. Experimental results using two different benchmark data sets showed a good pulse detection rate with a sensitivity of 96.21% and a positive predictive value of 99.22%, which was equivalent to the available reference algorithm. The novel SQI algorithm was effective and produced significantly lower SQI values in the presence of artifacts compared to SQI values during clean signals. The SQI algorithm may help to guide untrained pulse oximeter users and also help in the design of advanced algorithms for generating smart alarms. (paper)
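
    A minimal sketch of the cross-correlation part of such a quality index: two consecutive pulse segments are resampled to a common length and their normalised correlation is mapped onto a 0-100 scale; the synthetic pulses and the linear mapping are assumptions, and the slope detection with repeated Gaussian filters is not reproduced.

      import numpy as np

      def resample(seg, n=100):
          """Linearly resample a pulse segment to a fixed number of samples."""
          return np.interp(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, seg.size), seg)

      def sqi(prev_pulse, cur_pulse):
          """Signal quality index in [0, 100] from the correlation of consecutive pulses."""
          r = np.corrcoef(resample(prev_pulse), resample(cur_pulse))[0, 1]
          return 100.0 * max(0.0, float(r))

      t = np.linspace(0.0, 1.0, 80)
      clean = np.sin(np.pi * t) ** 2                                           # synthetic PPG-like pulse
      noisy = clean + 0.4 * np.random.default_rng(7).standard_normal(t.size)   # artifact-corrupted pulse

      print("SQI, clean vs clean: %.1f" % sqi(clean, clean))
      print("SQI, clean vs artifact: %.1f" % sqi(clean, noisy))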

  11. Photoplethysmogram signal quality estimation using repeated Gaussian filters and cross-correlation.

    Science.gov (United States)

    Karlen, W; Kobayashi, K; Ansermino, J M; Dumont, G A

    2012-10-01

    Pulse oximeters are monitors that noninvasively measure heart rate and blood oxygen saturation (SpO2). Unfortunately, pulse oximetry is prone to artifacts which negatively impact the accuracy of the measurement and can cause a significant number of false alarms. We have developed an algorithm to segment pulse oximetry signals into pulses and estimate the signal quality in real time. The algorithm iteratively calculates a signal quality index (SQI) ranging from 0 to 100. In the presence of artifacts and irregular signal morphology, the algorithm outputs a low SQI number. The pulse segmentation algorithm uses the derivative of the signal to find pulse slopes and an adaptive set of repeated Gaussian filters to select the correct slopes. Cross-correlation of consecutive pulse segments is used to estimate signal quality. Experimental results using two different benchmark data sets showed a good pulse detection rate with a sensitivity of 96.21% and a positive predictive value of 99.22%, which was equivalent to the available reference algorithm. The novel SQI algorithm was effective and produced significantly lower SQI values in the presence of artifacts compared to SQI values during clean signals. The SQI algorithm may help to guide untrained pulse oximeter users and also help in the design of advanced algorithms for generating smart alarms.

  12. Another higher order Langevin algorithm for QCD

    International Nuclear Information System (INIS)

    Kronfeld, A.S.

    1986-01-01

    This note provides an algorithm for integrating the Langevin equation which is second order. It introduces a term into the drift force which is a product of the Gaussian noise and a second derivative of the action. The specific application presented here is for nonabelian gauge theories interacting with fermions, e.g. QCD, for which it requires less memory than the Runge-Kutta algorithm of the same order. The memory and computational requirements of Euler, Runge-Kutta, and the present algorithm are compared. (orig.)
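
    For contrast, the sketch below runs the plain first-order Euler discretisation of the Langevin equation for a toy scalar "action" (one of the baselines this note compares against); the quartic action, the step size and the observable are assumptions, and the second-order drift correction described in the note is not implemented here.

      import numpy as np

      rng = np.random.default_rng(8)
      S_prime = lambda phi: phi + phi**3     # dS/dphi for the toy action S = phi^2/2 + phi^4/4
      eps, n_steps = 0.01, 200_000

      phi, samples = 0.0, []
      for _ in range(n_steps):
          eta = rng.standard_normal()        # Gaussian noise
          # First-order (Euler) Langevin step; higher-order schemes add drift corrections.
          phi = phi - eps * S_prime(phi) + np.sqrt(2.0 * eps) * eta
          samples.append(phi)

      print("<phi^2> from the Euler Langevin chain: %.3f" % np.mean(np.square(samples[10_000:])))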

  13. Geometry of Gaussian quantum states

    International Nuclear Information System (INIS)

    Link, Valentin; Strunz, Walter T

    2015-01-01

    We study the Hilbert–Schmidt measure on the manifold of mixed Gaussian states in multi-mode continuous variable quantum systems. An analytical expression for the Hilbert–Schmidt volume element is derived. Its corresponding probability measure can be used to study typical properties of Gaussian states. It turns out that although the manifold of Gaussian states is unbounded, an ensemble of Gaussian states distributed according to this measure still has a normalizable distribution of symplectic eigenvalues, from which unitarily invariant properties can be obtained. By contrast, we find that for an ensemble of one-mode Gaussian states based on the Bures measure the corresponding distribution cannot be normalized. As important applications, we determine the distribution and the mean value of von Neumann entropy and purity for the Hilbert–Schmidt measure. (paper)

  14. A Network of Kalman Filters for MAI and ISI Compensation in a Non-Gaussian Environment

    Directory of Open Access Journals (Sweden)

    Sayadi Bessem

    2005-01-01

    Full Text Available This paper develops a new multiuser detector based on a network of Kalman filters (NKF) dealing with multiple-access interference (MAI), intersymbol interference (ISI), and an impulsive observation noise. The two proposed schemes are based on the modeling of the DS-CDMA system by a discrete-time linear system that has non-Gaussian state and measurement noises. By approximating the non-Gaussian densities of the noises by a weighted sum of Gaussian terms and under the common MMSE estimation criterion, we first derive an NKF detector. This version is further optimized by introducing a feedback exploiting the ISI interference structure. The resulting scheme is an NKF detector based on a likelihood ratio test (LRT). Monte-Carlo simulations have shown that the NKF and the LRT-based NKF detectors significantly improve the efficiency and performance of the classical Kalman algorithm.

  15. Quantum steering of multimode Gaussian states by Gaussian measurements: monogamy relations and the Peres conjecture

    International Nuclear Information System (INIS)

    Ji, Se-Wan; Nha, Hyunchul; Kim, M S

    2015-01-01

    It is a topic of fundamental and practical importance how a quantum correlated state can be reliably distributed through a noisy channel for quantum information processing. The concept of quantum steering recently defined in a rigorous manner is relevant to study it under certain circumstances and here we address quantum steerability of Gaussian states to this aim. In particular, we attempt to reformulate the criterion for Gaussian steering in terms of local and global purities and show that it is sufficient and necessary for the case of steering a 1-mode system by an N-mode system. It subsequently enables us to reinforce a strong monogamy relation under which only one party can steer a local system of 1-mode. Moreover, we show that only a negative partial-transpose state can manifest quantum steerability by Gaussian measurements in relation to the Peres conjecture. We also discuss our formulation for the case of distributing a two-mode squeezed state via one-way quantum channels making dissipation and amplification effects, respectively. Finally, we extend our approach to include non-Gaussian measurements, more precisely, all orders of higher-order squeezing measurements, and find that this broad set of non-Gaussian measurements is not useful to demonstrate steering for Gaussian states beyond Gaussian measurements. (paper)

  16. A Cooperative Framework for Fireworks Algorithm

    OpenAIRE

    Zheng, Shaoqiu; Li, Junzhi; Janecek, Andreas; Tan, Ying

    2015-01-01

    This paper presents a cooperative framework for the fireworks algorithm (CoFFWA). A detailed analysis of the existing fireworks algorithm (FWA) and its recently developed variants has revealed that (i) with the selection strategy, the contribution of the firework with the best fitness (core firework) to the optimization overwhelms the contributions of the rest of the fireworks (non-core fireworks) in the explosion operator, and (ii) the Gaussian mutation operator is not as effective as it is designed to b...

  17. Treatment of non-Gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter

    International Nuclear Information System (INIS)

    Strandlie, A.; Wroldsen, J.

    2006-01-01

    If any of the probability densities involved in track fitting deviate from the Gaussian assumption, it is plausible that a non-linear estimator which better takes the actual shape of the distribution into account can do better. One such non-linear estimator is the Gaussian-sum filter, which is adequate if the distributions under consideration can be approximated by Gaussian mixtures. The main purpose of this paper is to present a Gaussian-sum filter for track fitting, based on a two-component approximation of the distribution of angular deflections due to multiple scattering. In a simulation study within a linear track model the Gaussian-sum filter is shown to be a competitive alternative to the Kalman filter. Scenarios at various momenta and with various maximum number of components in the Gaussian-sum filter are considered. Particularly at low momenta the Gaussian-sum filter yields a better estimate of the uncertainties than the Kalman filter, and it is also slightly more precise than the latter

  18. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new result is used to study functional summaries for log Gaussian Cox processes.

  19. High-Order Local Pooling and Encoding Gaussians Over a Dictionary of Gaussians.

    Science.gov (United States)

    Li, Peihua; Zeng, Hui; Wang, Qilong; Shiu, Simon C K; Zhang, Lei

    2017-07-01

    Local pooling (LP) in configuration (feature) space proposed by Boureau et al. explicitly restricts similar features to be aggregated, which can preserve as much discriminative information as possible. At the time it appeared, this method combined with sparse coding achieved competitive classification results with only a small dictionary. However, its performance lags far behind the state-of-the-art results as only the zero-order information is exploited. Inspired by the success of high-order statistical information in existing advanced feature coding or pooling methods, we make an attempt to address the limitation of LP. To this end, we present a novel method called high-order LP (HO-LP) to leverage the information higher than the zero-order one. Our idea is intuitively simple: we compute the first- and second-order statistics per configuration bin and model them as a Gaussian. Accordingly, we employ a collection of Gaussians as visual words to represent the universal probability distribution of features from all classes. Our problem is naturally formulated as encoding Gaussians over a dictionary of Gaussians as visual words. This problem, however, is challenging since the space of Gaussians is not a Euclidean space but forms a Riemannian manifold. We address this challenge by mapping Gaussians into the Euclidean space, which enables us to perform coding with common Euclidean operations rather than complex and often expensive Riemannian operations. Our HO-LP preserves the advantages of the original LP: pooling only similar features and using a small dictionary. Meanwhile, it achieves very promising performance on standard benchmarks, with either conventional, hand-engineered features or deep learning-based features.

  20. A study of the physical factors affecting air pollution dispersion in Helwan

    International Nuclear Information System (INIS)

    Megahed, A.A.

    1992-01-01

    Air pollution is considered one of the most important environmental problems facing humanity. The cement industry is usually responsible for building up high levels of pollutants. The present research focused on the study of air pollution control for the cement industry using mathematical modeling. A mathematical dispersion model was developed based on a Gaussian distribution in which the dispersion parameters increase with increasing atmospheric turbulence. The Gaussian equation takes into consideration the effects of emission rate, stack height, buoyant plume rise, and weather and meteorological parameters. The model was tested for different stack heights, wind speeds, and atmospheric stability classes. Maximum ground-level concentrations of cement pollutants were measured at different locations of Helwan, south Cairo, around the cement factories. Analysis of the results shows that the ground-level pollutant concentrations are inversely proportional to wind speed and atmospheric stability class. Stack height also affects the deposition behaviour of cement particulates. The model results show satisfactory agreement with the measured concentrations. 6 figs
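
    For reference, the standard Gaussian plume expression for ground-level concentration with total ground reflection can be sketched as below; the power-law dispersion coefficients are illustrative stand-ins for the stability-class dependent Pasquill-Gifford curves, not the values used in this study.

        import numpy as np

        def ground_level_concentration(Q, u, H, x, y,
                                       a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85):
            """Gaussian plume ground-level concentration (g/m^3) with total ground
            reflection.  Q: emission rate (g/s), u: wind speed (m/s), H: effective
            stack height (m), x: downwind and y: crosswind distance (m).  The
            power-law sigmas sigma = a * x**b are illustrative placeholders."""
            sigma_y = a_y * x**b_y
            sigma_z = a_z * x**b_z
            return (Q / (np.pi * u * sigma_y * sigma_z)
                    * np.exp(-y**2 / (2.0 * sigma_y**2))
                    * np.exp(-H**2 / (2.0 * sigma_z**2)))

        # Example: centreline concentration 1 km downwind of a 50 m stack
        # print(ground_level_concentration(Q=100.0, u=4.0, H=50.0, x=1000.0, y=0.0))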

  1. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection based on the principle that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  2. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    Directory of Open Access Journals (Sweden)

    Zhu Xiao

    2016-05-01

    Full Text Available In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS, is proposed, which enables vehicle state estimation (VSE with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods.

  3. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and measured data from tracer experiments. The types of models taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited for the calculation of the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding arithmetical problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, an evaluation of the models concerning their efficiency in solving the problems is carried out with the aid of objective methods. (orig./HP) [de

  4. Dispersion of some fission radionuclides during routine releases from ETRR-2 reactor

    International Nuclear Information System (INIS)

    Essa, S.M.K.; Mayhoub, A.B.; Mubarak, F.; Abedel Fattah, A.T.; Atia, S.

    2005-01-01

    One of the most important parameters in plume dispersion modeling is the plume growth (dispersion coefficients δ). Different models for estimating the dispersion parameters are discussed to establish the relative importance of one over the others. Comparisons were made between the power-law function, standard, split sigma, and split sigma theta methods. We use the double Gaussian expression for calculating concentration in this comparison. The results show that, with low wind speeds (<2 m/s), the split sigma and split sigma theta methods give much better results than the other methods, while with wind speeds greater than 2 m/s the power-law function method gives more plausible results.

  5. Breaking Gaussian incompatibility on continuous variable quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Kiukas, Jukka, E-mail: jukka.kiukas@aber.ac.uk [Department of Mathematics, Aberystwyth University, Penglais, Aberystwyth, SY23 3BZ (United Kingdom); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy)

    2015-08-15

    We characterise Gaussian quantum channels that are Gaussian incompatibility breaking, that is, transform every set of Gaussian measurements into a set obtainable from a joint Gaussian observable via Gaussian postprocessing. Such channels represent local noise which renders measurements useless for Gaussian EPR-steering, providing the appropriate generalisation of entanglement breaking channels for this scenario. Understanding the structure of Gaussian incompatibility breaking channels contributes to the resource theory of noisy continuous variable quantum information protocols.

  6. Fluctuation Scaling, Calibration of Dispersion, and Detection of Differences.

    Science.gov (United States)

    Holland, Rianne; Rebmann, Roman; Williams, Craig; Hanley, Quentin S

    2017-11-07

    Fluctuation scaling describes the relationship between the mean and standard deviation of a set of measurements. An example is Horwitz scaling, which has been reported from interlaboratory studies. Horwitz and similar studies have reported simple exponential and segmented scaling laws with exponents (α) typically between 0.85 (Horwitz) and 1 when not operating near a detection limit. When approaching a detection limit, the exponents change and approach an apparently Gaussian (α = 0) model. This behavior is often presented as a property of interlaboratory studies, which makes controlled replication to understand the behavior costly to perform. To assess the contribution of instrumentation to larger scale fluctuation scaling, we measured the behavior of two inductively coupled plasma atomic emission spectrometry (ICP-AES) systems, in two laboratories measuring thulium using two emission lines. The standard deviation universally increased with the uncalibrated signal, indicating the system was heteroscedastic. The response from all lines and both instruments was consistent with a single exponential dispersion model having parameters α = 1.09 and β = 0.0035. No evidence of Horwitz scaling was found, and there was no evidence of Poisson noise limiting behavior. The "Gaussian" component was a consequence of background subtraction for all lines and both instruments. The observation of a simple exponential dispersion model in the data allows for the definition of a difference detection limit (DDL) with universal applicability to systems following known dispersion. The DDL is the minimum separation between two points along a dispersion model required to claim they are different according to a particular statistical test. The DDL scales transparently with the mean and works at any location in a response function.

  7. Kalman filtration of radiation monitoring data from atmospheric dispersion of radioactive materials

    DEFF Research Database (Denmark)

    Drews, M.; Lauritzen, B.; Madsen, H.

    2004-01-01

    A Kalman filter method using off-site radiation monitoring data is proposed as a tool for on-line estimation of the source term for short-range atmospheric dispersion of radioactive materials. The method is based on the Gaussian plume model, in which the plume parameters including the source term...
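
    A minimal sketch of the general idea (not the authors' code): if the unknown release rate enters the Gaussian plume model linearly, each monitor reading is a plume dilution factor times the rate plus noise, so a scalar Kalman filter can track the rate on-line. The function names and noise levels below are assumptions made for illustration.

        import numpy as np

        def kalman_source_update(q_est, p_est, couplings, readings, meas_var, proc_var=1e2):
            """One update of a scalar Kalman filter for the release rate q (Bq/s).
            couplings[i] is the Gaussian-plume dilution factor (s/m^3) from the
            source to monitor i for the current meteorology, so the expected
            reading is couplings[i] * q.  All values here are illustrative."""
            # Random-walk prediction for a possibly time-varying release rate.
            p_pred = p_est + proc_var
            q_pred = q_est
            for h, z in zip(couplings, readings):     # sequential scalar updates
                s = h * p_pred * h + meas_var
                k = p_pred * h / s
                q_pred = q_pred + k * (z - h * q_pred)
                p_pred = (1.0 - k * h) * p_pred
            return q_pred, p_pred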

  8. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance systems. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms directly using the compressive sampling images are developed. A mixture of Gaussians model is applied in the compressive image space to model the background image and for foreground detection. For each moving target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results. However, using random Gaussian and Toeplitz phase masks can achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  9. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997, when noise is not taken into account. Particularly, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of the permutation parity faster than a classical algorithm without the necessity of entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme also can be used for the practical realization of different quantum algorithms in DQDs.

  10. Threshold Multi Split-Row algorithm for decoding irregular LDPC codes

    Directory of Open Access Journals (Sweden)

    Chakir Aqil

    2017-12-01

    Full Text Available In this work, we propose a new threshold multi split-row algorithm in order to improve the multi split-row algorithm for decoding irregular LDPC codes. We give a complete description of our algorithm as well as its advantages for LDPC codes. The simulation results over an additive white Gaussian noise channel show an improvement in error performance of between 0.4 dB and 0.6 dB compared to the multi split-row algorithm.

  11. Atmospheric Dispersion Models for the Calculation of Environmental Impact: A Comparative Study

    International Nuclear Information System (INIS)

    Caputo, Marcelo; Gimenez, Marcelo; Felicelli, Sergio; Schlamp, Miguel

    2000-01-01

    In this paper some new comparisons are presented between the codes AERMOD, HPDM and HYSPLIT. The first two are Gaussian stationary plume codes, and they were developed to calculate the environmental impact produced by chemical contaminants. HYSPLIT is a hybrid code because it uses a Lagrangian reference system to describe the transport of a puff center of mass and an Eulerian system to describe the dispersion within the puff. The meteorological and topographic data used in the present work were obtained from runs of the prognostic code RAMS, provided by NOAA. The emission was fixed at 0.3 g/s, 284 K and 0 m/s. The surface roughness was fixed at 0.1 m and flat terrain was considered. In order to analyze separate effects and to go deeper into the comparison, the meteorological data were split into two sets depending on the atmospheric stability class (F to B), and the wind direction was fixed to neglect its contribution to the contaminant dispersion. The main contribution of this work is to provide recommendations about the validity range of each code depending on the model used. In the case of Gaussian models the validity range is fixed by the distance over which the atmospheric conditions can be considered homogeneous. On the other hand, the validity range of HYSPLIT's model is determined by the spatial extension of the meteorological data. The results obtained with the three codes are comparable if the emission is in equilibrium with the environment, meaning that the gases are emitted at the same temperature as the medium with zero velocity. There was an important difference between the dispersion parameters used by the Gaussian codes.

  12. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    Science.gov (United States)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

    Over the past decade, location based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, which include Radio-frequency identification (RFID), Ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has a better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and positioning purposes to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from photo diodes and the trilateration technique. By appropriately making use of the characteristics of receiver movements and the property of trilateration, estimation of three-dimensional (3-D) coordinates is attained. Filtering is applied to enable tracking capability of the algorithm, and a higher accuracy is reached compared to raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of the Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
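
    The trilateration step underlying such RSS-based positioning can be sketched generically as a linearised least-squares problem, assuming the distances to the LED anchors have already been estimated from the received signal strength; this is an illustrative sketch, not the paper's GM-SPPF algorithm.

        import numpy as np

        def trilaterate(anchors, distances):
            """Least-squares receiver position from anchor coordinates (N x 3,
            e.g. LED positions in metres) and estimated anchor-receiver
            distances (length N, N >= 4).  Generic linearised trilateration,
            shown only to illustrate the idea."""
            anchors = np.asarray(anchors, dtype=float)
            d = np.asarray(distances, dtype=float)
            ref, d0 = anchors[0], d[0]
            # Subtract the first sphere equation from the others to linearise.
            A = 2.0 * (anchors[1:] - ref)
            b = (d0**2 - d[1:]**2
                 + np.sum(anchors[1:]**2, axis=1) - np.sum(ref**2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos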

  13. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Jardak, Seifallah

    2014-09-01

    Correlated waveforms have a number of applications in different fields, such as radar and communication. It is very easy to generate correlated waveforms using infinite alphabets, but for some applications it is very challenging to use them in practice. Moreover, to generate infinite alphabet constant envelope correlated waveforms, the available research uses iterative algorithms, which are computationally very expensive. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps the Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability-density-function of the Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. To generate equiprobable symbols, the area of each region is kept the same. If the requirement is to have each symbol with its own unique probability, the proposed scheme allows that as well. Although the proposed scheme is general, the main focus of this paper is to generate finite alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
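
    A hedged sketch of the described mapping for one modulation scheme: a standard Gaussian variable is mapped onto M equiprobable PAM levels by splitting its density into M regions of equal probability. The paper additionally derives the induced cross-correlations, which are not reproduced here; all parameter values below are illustrative.

        import numpy as np
        from scipy.stats import norm

        def gaussian_to_pam(g, M=4):
            """Map standard Gaussian samples g onto M equiprobable PAM symbols by
            splitting the Gaussian density into M regions of equal area."""
            edges = norm.ppf(np.arange(1, M) / M)       # M-1 interior thresholds
            levels = np.arange(M) * 2.0 - (M - 1)       # e.g. -3, -1, 1, 3 for M = 4
            idx = np.searchsorted(edges, g)             # region index of each sample
            return levels[idx]

        # Example: correlated Gaussians -> correlated 4-PAM symbols
        rng = np.random.default_rng(0)
        cov = np.array([[1.0, 0.6], [0.6, 1.0]])
        g = rng.multivariate_normal(np.zeros(2), cov, size=100000)
        s = gaussian_to_pam(g)
        print(np.corrcoef(s.T))   # symbol correlation induced by the Gaussian one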

  14. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  15. Atmospheric dispersion calculations in a low mountain area

    International Nuclear Information System (INIS)

    Schmid, S.

    1987-01-01

    The applicability of the Gaussian model for assessing the short-range environmental exposure from an emission source in topographically inhomogeneous terrain is tested. An atmospheric dispersion model of general applicability is used, which is based on results of hydrodynamic flow models. Approaches for turbulence and radiation parameterization are tested by means of a vertically one-dimensional flow model. In order to introduce the effects of the topography in the boundary-layer simulations, the three-dimensional mesoscale model (Ulrich) is applied. The two models are verified by way of episode simulation using wind profile measurements. The differences in the models' results are to show the topographic influence. The calculated flow fields serve as input to a random-walk model applied for calculating ground-level concentration fields in the vicinity of an emission source. The Gaussian model underestimates the pollution under stable conditions. Convective conditions may change the effective source height through vertical effects caused by orography which, depending on the direction of the free flow, leads to an increase or decrease of pollutant concentration at ground level. Applying the more complex dispersion model, the concentration maxima under stable conditions are closer to the source by a factor of five, and under unstable conditions about one and a half times more remote. (orig./HP) [de

  16. Learning non-Gaussian Time Series using the Box-Cox Gaussian Process

    OpenAIRE

    Rios, Gonzalo; Tobar, Felipe

    2018-01-01

    Gaussian processes (GPs) are Bayesian nonparametric generative models that provide interpretability of hyperparameters, admit closed-form expressions for training and inference, and are able to accurately represent uncertainty. To model general non-Gaussian data with complex correlation structure, GPs can be paired with an expressive covariance kernel and then fed into a nonlinear transformation (or warping). However, overparametrising the kernel and the warping is known to, respectively, hin...

  17. Numerical modeling of macrodispersion in heterogeneous media: a comparison of multi-Gaussian and non-multi-Gaussian models

    Science.gov (United States)

    Wen, Xian-Huan; Gómez-Hernández, J. Jaime

    1998-03-01

    The macrodispersion of an inert solute in a 2-D heterogeneous porous media is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values. Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than

  18. Non-Gaussian halo assembly bias

    International Nuclear Information System (INIS)

    Reid, Beth A.; Verde, Licia; Dolag, Klaus; Matarrese, Sabino; Moscardini, Lauro

    2010-01-01

    The strong dependence of the large-scale dark matter halo bias on the (local) non-Gaussianity parameter, f_NL, offers a promising avenue towards constraining primordial non-Gaussianity with large-scale structure surveys. In this paper, we present the first detection of the dependence of the non-Gaussian halo bias on halo formation history using N-body simulations. We also present an analytic derivation of the expected signal based on the extended Press-Schechter formalism. In excellent agreement with our analytic prediction, we find that the halo formation history-dependent contribution to the non-Gaussian halo bias (which we call non-Gaussian halo assembly bias) can be factorized in a form approximately independent of redshift and halo mass. The correction to the non-Gaussian halo bias due to the halo formation history can be as large as 100%, with a suppression of the signal for recently formed halos and enhancement for old halos. This could in principle be a problem for realistic galaxy surveys if observational selection effects were to pick galaxies occupying only recently formed halos. Current semi-analytic galaxy formation models, for example, imply an enhancement in the expected signal of ∼ 23% and ∼ 48% for galaxies at z = 1 selected by stellar mass and star formation rate, respectively.

  19. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    Science.gov (United States)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with a strongly inhomogeneous medium. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to address the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.

  20. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.

  1. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    Science.gov (United States)

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  2. Transport and Dispersion of Nanoparticles in Periodic Nanopost Arrays

    KAUST Repository

    He, Kai; Retterer, Scott T.; Srijanto, Bernadeta R.; Conrad, Jacinta C.; Krishnamoorti, Ramanan

    2014-01-01

    Nanoparticles transported through highly confined porous media exhibit faster breakthrough than small molecule tracers. Despite important technological applications in advanced materials, human health, energy, and environment, the microscale mechanisms leading to early breakthrough have not been identified. Here, we measure dispersion of nanoparticles at the single-particle scale in regular arrays of nanoposts and show that for highly confined flows of dilute suspensions of nanoparticles the longitudinal and transverse velocities exhibit distinct scaling behaviors. The distributions of transverse particle velocities become narrower and more non-Gaussian when the particles are strongly confined. As a result, the transverse dispersion of highly confined nanoparticles at low Péclet numbers is significantly less important than longitudinal dispersion, leading to early breakthrough. This finding suggests a fundamental mechanism by which to control dispersion and thereby improve efficacy of nanoparticles applied for advanced polymer nanocomposites, drug delivery, hydrocarbon production, and environmental remediation. © 2014 American Chemical Society.

  4. Area of isodensity contours in Gaussian and non-Gaussian fields

    International Nuclear Information System (INIS)

    Ryden, B.S.

    1988-01-01

    The area of isodensity contours in a smoothed density field can be measured by the contour-crossing statistic N1, the number of times per unit length that a line drawn through the density field pierces an isodensity contour. The contour-crossing statistic distinguishes between Gaussian and non-Gaussian fields and provides a measure of the effective slope of the power spectrum. The statistic is easy to apply and can be used on pencil beams and slices as well as on a three-dimensional field. 10 references
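
    The statistic is indeed simple to apply; a toy one-dimensional version (an illustrative sketch under assumed inputs, not the author's code) just counts threshold crossings along a sampled pencil beam and divides by the beam length.

        import numpy as np

        def contour_crossings_per_length(line_values, threshold, dx=1.0):
            """N1 statistic along one pencil beam: the number of times the sampled
            field crosses the iso-density level `threshold`, divided by the beam
            length.  `line_values` are field samples at spacing dx."""
            above = line_values > threshold
            crossings = np.count_nonzero(np.diff(above))
            return crossings / (dx * (len(line_values) - 1))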

  5. IBS for non-gaussian distributions

    International Nuclear Information System (INIS)

    Fedotov, A.; Sidorin, A.O.; Smirnov, A.V.

    2010-01-01

    In many situations the distribution can deviate significantly from a Gaussian, which requires accurate treatment of IBS. Our original interest in this problem was motivated by the need to have an accurate description of beam evolution due to IBS while the distribution is strongly affected by the external electron cooling force. A variety of models with various degrees of approximation were developed and implemented in BETACOOL in the past to address this topic. A more complete treatment based on the friction coefficient and the full 3-D diffusion tensor was introduced in BETACOOL at the end of 2007 under the name 'local IBS model'. Such a model allowed the calculation of IBS for an arbitrary beam distribution. The numerical benchmarking of this local IBS algorithm and its comparison with other models was reported before. In this paper, after briefly describing the model and its limitations, we present its comparison with available experimental data.

  6. Modelling Inverse Gaussian Data with Censored Response Values: EM versus MCMC

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2011-01-01

    Full Text Available Low detection limits are common when measuring environmental variables. Building models using data containing low or high detection limits without adjusting for the censoring produces biased models. This paper offers approaches to estimating an inverse Gaussian distribution when some of the data used are censored because of low or high detection limits. Adjustments for the censoring can be made, if there is between 2% and 20% censoring, using either the EM algorithm or MCMC. This paper compares these approaches.

  7. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    Science.gov (United States)

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
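
    A minimal sketch of such an iterative position-weighted centre-of-gravity estimate for a 4 × 4 array (illustrative only; the channel coordinates, Gaussian width and iteration count are placeholder assumptions, not the paper's optimized values):

        import numpy as np

        def iterative_weighted_cog(signals, xy, sigma=1.6, n_iter=20):
            """Iterative position-weighted centre of gravity for a 4x4 SiPM array.
            signals: per-channel amplitudes (16,); xy: channel centre coordinates
            (16, 2) in mm; sigma: width of the Gaussian weighting (mm)."""
            pos = (signals[:, None] * xy).sum(0) / signals.sum()   # plain CoG start
            for _ in range(n_iter):
                w = signals * np.exp(-np.sum((xy - pos) ** 2, axis=1) / (2 * sigma**2))
                pos = (w[:, None] * xy).sum(0) / w.sum()
            return pos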

  8. Optical Coherence Tomography Noise Reduction Using Anisotropic Local Bivariate Gaussian Mixture Prior in 3D Complex Wavelet Domain.

    Science.gov (United States)

    Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D

    2013-01-01

    In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the proposed distribution for the noise-free data plays a key role in the performance of the MMSE estimator, a priori distribution for the pdf of the noise-free 3D complex wavelet coefficients is proposed which is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters which are able to capture the heavy-tailed property and inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained based on using a Gaussian/two-sided Rayleigh noise distribution and a homomorphic/nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.

  10. Soft Sensor Modeling Based on Multiple Gaussian Process Regression and Fuzzy C-mean Clustering

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available In order to overcome the difficulties of online measurement of some crucial biochemical variables in fermentation processes, a new soft sensor modeling method is presented based on Gaussian process regression and fuzzy C-means clustering. With the consideration that the typical fermentation process can be divided into 4 phases, namely the lag phase, exponential growth phase, stable phase and death phase, the training samples are classified into 4 subcategories by using the fuzzy C-means clustering algorithm. For each sub-category, the samples are trained using Gaussian process regression and the corresponding soft-sensing sub-model is established respectively. For a new sample, the memberships between this sample and the sub-models are computed based on the Euclidean distance, and then the prediction output of the soft sensor is obtained using a weighted sum. Taking lysine fermentation as an example, the simulation and experiment are carried out and the corresponding results show that the presented method achieves better fitting and generalization ability than a radial basis function neural network and a single Gaussian process regression model.
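
    A rough Python sketch of the overall structure using scikit-learn; K-means centroids with fuzzy-style distance-based memberships stand in for fuzzy C-means, and all kernel choices and parameters are illustrative assumptions rather than the authors' settings.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def fit_local_gpr_models(X, y, n_phases=4):
            """Cluster the training samples into phases and fit one GPR per phase.
            K-means stands in for fuzzy C-means here; purely illustrative."""
            km = KMeans(n_clusters=n_phases, n_init=10, random_state=0).fit(X)
            models = []
            for k in range(n_phases):
                mask = km.labels_ == k
                gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
                models.append(gpr.fit(X[mask], y[mask]))
            return km, models

        def predict_weighted(km, models, X_new, m=2.0):
            """Fuzzy-style membership weights from distances to the cluster centres
            (exponent m as in fuzzy C-means), then a membership-weighted sum of
            the per-phase GPR predictions."""
            d = np.linalg.norm(X_new[:, None, :] - km.cluster_centers_[None], axis=2) + 1e-12
            u = 1.0 / d ** (2.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
            preds = np.column_stack([g.predict(X_new) for g in models])
            return (u * preds).sum(axis=1)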

  11. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available As one of the most critical issues in target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noises always exist in the tracking process, and they usually lead to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least-squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with a weighted least-squares solution based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and the efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.

  12. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by the convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
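
    A toy version of the multi-scale idea (not the published method) can be sketched as follows: blur the image with Gaussians of increasing width, take the detail layers as differences from the original, form their weighted sum, and finish with a gamma adjustment. The sigmas, weights and gamma below are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_detail_correction(img, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
            """Toy Gaussian multi-scale correction: detail layers are differences
            between the image and its Gaussian-blurred versions; their weighted
            sum approximates an inhomogeneity-free image, followed by a gamma
            adjustment of contrast and brightness."""
            img = img.astype(float)
            if weights is None:
                weights = np.ones(len(sigmas)) / len(sigmas)
            details = [img - gaussian_filter(img, s) for s in sigmas]
            corrected = sum(w * d for w, d in zip(weights, details))
            corrected -= corrected.min()
            corrected /= corrected.max() + 1e-12        # normalise to [0, 1]
            return corrected ** gamma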

  13. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    Science.gov (United States)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is essential for correctly determining the earthquake's location and other parameters that are utilized for building seismic catalogues. P-wave arrivals of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even in the presence of very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, with an SNR of -12 dB. The proposed algorithm achieves an onset time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms them.
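
    A generic sketch of this style of picker (not the authors' MLoG filter): apply a Laplacian-of-Gaussian filter to a simple characteristic function of the trace and use a dual-threshold comparator to locate the onset; all parameter values and the characteristic function are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def pick_onset(trace, dt, sigma=20, high=3.0, low=1.0):
            """Toy picker: LoG-filter the squared trace, declare a trigger where the
            response exceeds `high` standard deviations, and walk back to where it
            last dropped below `low`.  Returns the onset time in seconds, or None."""
            cf = trace.astype(float) ** 2                 # simple characteristic function
            resp = np.abs(gaussian_laplace(cf, sigma=sigma))
            resp = (resp - resp.mean()) / (resp.std() + 1e-12)
            trig = int(np.argmax(resp > high))            # first strong detection
            if not resp[trig] > high:
                return None                               # nothing detected
            below = np.nonzero(resp[:trig] < low)[0]
            onset = below[-1] if below.size else trig
            return onset * dt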

  14. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
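
    A minimal grayscale implementation of the dual fixed-point iteration described by Chambolle, sketched here under the usual step-size condition τ ≤ 1/8 for the ROF model min_u TV(u) + ||u − f||²/(2λ); the fidelity weight and iteration count are illustrative choices, not the paper's defaults.

        import numpy as np

        def grad(u):
            """Forward differences with Neumann boundary (last row/column zero)."""
            gx = np.zeros_like(u); gy = np.zeros_like(u)
            gx[:, :-1] = u[:, 1:] - u[:, :-1]
            gy[:-1, :] = u[1:, :] - u[:-1, :]
            return gx, gy

        def div(px, py):
            """Discrete divergence, the adjoint of -grad above."""
            d = np.zeros_like(px)
            d[:, 0] = px[:, 0];  d[:, 1:] = px[:, 1:] - px[:, :-1]
            d[0, :] += py[0, :]; d[1:, :] += py[1:, :] - py[:-1, :]
            return d

        def chambolle_tv_denoise(f, lam=15.0, tau=0.125, n_iter=100):
            """Chambolle's dual projection iteration; the denoised image is
            u = f - lam * div(p) with p the converged dual field."""
            f = np.asarray(f, dtype=float)
            px = np.zeros_like(f); py = np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(div(px, py) - f / lam)
                norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
                px = (px + tau * gx) / norm
                py = (py + tau * gy) / norm
            return f - lam * div(px, py)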

  15. Theoretical analysis of non-Gaussian heterogeneity effects on subsurface flow and transport

    Science.gov (United States)

    Riva, Monica; Guadagnini, Alberto; Neuman, Shlomo P.

    2017-04-01

    upon perturbing K = e^Y to second order in σ_Y even as the corresponding series diverges. Our analysis is rendered mathematically tractable by considering mean-uniform steady state flow in an unbounded, two-dimensional domain of mildly heterogeneous Y with a single-scale function G having an isotropic exponential covariance. Results consist of expressions for (a) lead-order autocovariance and cross-covariance functions of hydraulic head, velocity, and advective particle displacement and (b) analogues of preasymptotic as well as asymptotic Fickian dispersion coefficients. We compare these theoretically and graphically with corresponding expressions developed in the literature for Gaussian Y. We find the former to differ from the latter by a factor k = /2 ( denoting ensemble expectation) and the GSG covariance of longitudinal velocity to contain an additional nugget term depending on this same factor. In the limit as Y becomes Gaussian, k reduces to one and the nugget term drops out.

  16. Relationship among the Voigt integral and the dispersion function of plasma. Additional methods for estimating the Voigt integral

    International Nuclear Information System (INIS)

    Jimenez D, H.; Flores L, H.; Cabral P, A.; Bravo O, A.

    1990-05-01

    A relationship between the Lorentzian-Gaussian profile convolution and the plasma dispersion function is presented; thus, the methods available to calculate the latter also serve to calculate the Voigt profile. (Author)
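
    The relationship can be illustrated numerically: the Voigt profile is proportional to the real part of the Faddeeva function w(z), and the plasma dispersion function is Z(ζ) = i√π·w(ζ), so one routine serves both. The sketch below uses scipy.special.wofz and is only an illustration of this connection, not the paper's derivation.

        import numpy as np
        from scipy.special import wofz

        def voigt_profile(x, sigma, gamma):
            """Voigt profile (Gaussian of std sigma convolved with a Lorentzian of
            HWHM gamma), evaluated via the Faddeeva function w(z)."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

        def plasma_dispersion(zeta):
            """Plasma dispersion function Z(zeta) = i*sqrt(pi)*w(zeta)."""
            return 1j * np.sqrt(np.pi) * wofz(zeta)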

  17. Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach

    Science.gov (United States)

    Reznichenko, A. V.; Terekhov, I. S.

    2018-04-01

    We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimates at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also, for the small-dispersion case we find the analytical expressions for simple correlators of the output signals in our noisy channel.

  18. Evaluation of different numerical methodologies for dispersion of air pollutants in an urban environment

    International Nuclear Information System (INIS)

    Mumovic, D.; Crowther, J.M.; Stevanovic, Z.

    2003-01-01

    Since 1950 the world population has more than doubled but meanwhile the global number of cars has increased by a factor of 10. In that same period the fraction of people living in urban areas has increased by a factor of 4. Apart from large point-sources of local air pollution, traffic induced pollution is now the most significant contributor to urban air quality in city centres, particularly for carbon monoxide, oxides of nitrogen and fine particulate matter. Until recently, pollutant dispersion in urban areas has usually been numerically investigated by using empirical models, such as the Gaussian plume model, or by extensions of this technique to line sources and multiple sources. More recently, advanced computational fluid dynamics (CFD) simulations have been attempted but have been mainly two-dimensional and often encompassing only a single street canyon. This paper provides a comprehensive, critical evaluation of dispersion of pollutants in urban areas. A three-dimensional flow model has been set-up for a staggered crossroad, using the Navier-Stokes equations and the conservation equation for species concentration. The effect of using several different turbulence models, including the k-ε model, modifications and extensions, has been investigated. Cartesian coordinates have been used in connection with the Partial Solution Algorithm (PARSOL) and Body Fitted Coordinates (BFC). The effects of several different numerical algorithms for discretization of differential equations have also been studied. More than thirty cases are analysed, and the main results are compared with wind tunnel experiments. The numerical results are presented as non-dimensional values to facilitate comparison between experimental and numerical studies. It has been shown that the numerical studies have been able to simulate the air-flow in urban areas and confirm, qualitatively, the previous field observations and wind tunnel results. This success encouraged the authors to extend such

  20. SNPMClust: Bivariate Gaussian Genotype Clustering and Calling for Illumina Microarrays

    Directory of Open Access Journals (Sweden)

    Stephen W. Erickson

    2016-07-01

    Full Text Available SNPMClust is an R package for genotype clustering and calling with Illumina microarrays. It was originally developed for studies using the GoldenGate custom genotyping platform but can be used with other Illumina platforms, including Infinium BeadChip. The algorithm first rescales the fluorescent signal intensity data, adds empirically derived pseudo-data to minor allele genotype clusters, then uses the package mclust for bivariate Gaussian model fitting. We compared the accuracy and sensitivity of SNPMClust to that of GenCall, Illumina's proprietary algorithm, on a data set of 94 whole-genome amplified buccal (cheek swab) DNA samples. These samples were genotyped on a custom panel which included 1064 SNPs for which the true genotype was known with high confidence. SNPMClust produced uniformly lower false call rates over a wide range of overall call rates.

  1. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  2. Segmentation of Concealed Objects in Passive Millimeter-Wave Images Based on the Gaussian Mixture Model

    Science.gov (United States)

    Yu, Wangyang; Chen, Xiangguang; Wu, Lei

    2015-04-01

    Passive millimeter wave (PMMW) imaging has become one of the most effective means to detect objects concealed under clothing. Due to the limitations of the available hardware and the inherent physical properties of PMMW imaging systems, images often exhibit poor contrast and low signal-to-noise ratios. Thus, it is difficult to achieve ideal results by using a general segmentation algorithm. In this paper, an advanced Gaussian Mixture Model (GMM) algorithm for the segmentation of concealed objects in PMMW images is presented. Our work is concerned with the fact that the GMM is a parametric statistical model, which is often used to characterize the statistical behavior of images. Our approach is three-fold: First, we remove the noise from the image using both a notch reject filter and a total variation filter. Next, we use an adaptive parameter initialization GMM algorithm (APIGMM) for simulating the histogram of the images; the APIGMM provides an initial number of Gaussian components and starts with more appropriate parameters. A Bayesian decision rule is employed to separate the pixels of concealed objects from other areas. Finally, the confidence interval (CI) method, alongside local gradient information, is used to extract the concealed objects. The proposed hybrid segmentation approach detects the concealed objects more accurately, even compared to two other state-of-the-art segmentation methods.
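
    A stripped-down sketch of GMM-based segmentation with a Bayesian (maximum-posterior) decision, using scikit-learn; the paper's adaptive parameter initialization, notch/total-variation denoising and CI refinement are not reproduced, and the component count and choice of 'object' class below are assumptions for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_segment(image, n_components=3):
            """Fit a Gaussian mixture to the pixel intensities and assign each pixel
            to the component with the highest posterior probability.  The component
            with the highest mean is treated as the 'object' class here purely for
            illustration."""
            x = image.reshape(-1, 1).astype(float)
            gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
            labels = gmm.predict(x).reshape(image.shape)
            object_label = int(np.argmax(gmm.means_.ravel()))
            return labels == object_label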

  3. Response moments of dynamic systems under non-Gaussian random excitation by the equivalent non-Gaussian excitation method

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2016-01-01

    Equivalent non-Gaussian excitation method is proposed to obtain the response moments up to the 4th order of dynamic systems under non-Gaussian random excitation. The non-Gaussian excitation is prescribed by the probability density and the power spectrum, and is described by an Ito stochastic differential equation. Generally, moment equations for the response, which are derived from the governing equations for the excitation and the system, are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation even though the system is linear. In the equivalent non-Gaussian excitation method, the diffusion coefficient is replaced with the equivalent diffusion coefficient approximately to obtain a closed set of the moment equations. The square of the equivalent diffusion coefficient is expressed by a quadratic polynomial. In numerical examples, a linear system subjected to non-Gaussian excitations with bimodal and Rayleigh distributions is analyzed by using the present method. The results show that the method yields the variance, skewness and kurtosis of the response with high accuracy for non-Gaussian excitation with widely different probability densities and bandwidths. The statistical moments of the equivalent non-Gaussian excitation are also investigated to describe the feature of the method. (paper)

  4. Analytic matrix elements with shifted correlated Gaussians

    DEFF Research Database (Denmark)

    Fedorov, D. V.

    2017-01-01

    Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.

  5. Semiparametric inference on the fractal index of Gaussian and conditionally Gaussian time series data

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel

    Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index, α, of a time series. Our setup encompasses a large class of Gaussian...

  6. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability

  7. Equivalent non-Gaussian excitation method for response moment calculation of systems under non-Gaussian random excitation

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2015-01-01

    Equivalent non-Gaussian excitation method is proposed to obtain the moments up to the fourth order of the response of systems under non-Gaussian random excitation. The excitation is prescribed by the probability density and power spectrum. Moment equations for the response can be derived from the stochastic differential equations for the excitation and the system. However, the moment equations are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation. In the proposed method, the diffusion coefficient is replaced with the equivalent diffusion coefficient approximately to obtain a closed set of the moment equations. The square of the equivalent diffusion coefficient is expressed by a second-order polynomial. In order to demonstrate the validity of the method, a linear system subjected to non-Gaussian excitation with a generalized Gaussian distribution is analyzed. The results show the method is applicable to non-Gaussian excitation with widely different kurtosis and bandwidth. (author)

  8. Entanglement in Gaussian matrix-product states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Ericsson, Marie

    2006-01-01

    Gaussian matrix-product states are obtained as the outputs of projection operations from an ancillary space of M infinitely entangled bonds connecting neighboring sites, applied at each of N sites of a harmonic chain. Replacing the projections by associated Gaussian states, the building blocks, we show that the entanglement range in translationally invariant Gaussian matrix-product states depends on how entangled the building blocks are. In particular, infinite entanglement in the building blocks produces fully symmetric Gaussian states with maximum entanglement range. From their peculiar properties of entanglement sharing, a basic difference with spin chains is revealed: Gaussian matrix-product states can possess unlimited, long-range entanglement even with minimum number of ancillary bonds (M=1). Finally we discuss how these states can be experimentally engineered from N copies of a three-mode building block and N two-mode finitely squeezed states

  9. Non-Gaussian Stochastic Radiation Transfer in Finite Planar Media with Quadratic Scattering

    International Nuclear Information System (INIS)

    Sallah, M.

    2016-01-01

    The stochastic radiation transfer is considered in a participating planar finite continuously fluctuating medium characterized by non-Gaussian variability. The problem is considered for diffuse-reflecting boundaries with quadratic Rayleigh scattering. The random variable transformation (RVT) technique is used to get the complete average for the solution functions that are represented by the probability-density function (PDF) of the solution process. The RVT algorithm applies a simple integral transformation to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). Then the radiation transfer equation is solved deterministically to get a closed form for the solution as a function of x and L. The solution is then used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get the complete analytical averages for some interesting physical quantities, namely, reflectivity, transmissivity and partial heat fluxes at the medium boundaries. Numerical results are represented graphically for different non-Gaussian probability distribution functions, which are compared with the corresponding Gaussian PDF.

  10. Methods and Algorithms for Solving Inverse Problems for Fractional Advection-Dispersion Equations

    KAUST Repository

    Aldoghaither, Abeer

    2015-11-12

    Fractional calculus has been introduced as an efficient tool for modeling physical phenomena, thanks to its memory and hereditary properties. For example, fractional models have been successfully used to describe anomalous diffusion processes such as contaminant transport in soil, oil flow in porous media, and groundwater flow. These models capture important features of particle transport such as particles with velocity variations and long-rest periods. Mathematical modeling of physical phenomena requires the identification of parameters and variables from available measurements. This is referred to as an inverse problem. In this work, we are interested in studying theoretically and numerically inverse problems for space Fractional Advection-Dispersion Equation (FADE), which is used to model solute transport in porous media. Identifying parameters for such an equation is important to understand how chemical or biological contaminants are transported throughout surface aquifer systems. For instance, an estimate of the differentiation order in groundwater contaminant transport model can provide information about soil properties, such as the heterogeneity of the medium. Our main contribution is to propose a novel efficient algorithm based on modulating functions to estimate the coefficients and the differentiation order for space FADE, which can be extended to general fractional Partial Differential Equation (PDE). We also show how the method can be applied to the source inverse problem. This work is divided into two parts: In part I, the proposed method is described and studied through an extensive numerical analysis. The local convergence of the proposed two-stage algorithm is proven for 1D space FADE. The properties of this method are studied along with its limitations. Then, the algorithm is generalized to the 2D FADE. In part II, we analyze direct and inverse source problems for a space FADE. The problem consists of recovering the source term using final

  11. A simple relationship between the Voigt integral and the plasma dispersion function. Additional methods to estimate the Voigt integral

    International Nuclear Information System (INIS)

    Jimenez-Dominguez, H.; Flores-Llamas, H.; Cabral-Prieto, C.; Bravo-Ortega, A.

    1989-01-01

    A relationship is presented between the Lorentzian-Gaussian profile convolutions and the plasma dispersion function; thus, the methods available to calculate the latter serve also to calculate the Voigt profile. (orig.)
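
    The connection can be sketched numerically: the Voigt profile is proportional to the real part of the Faddeeva function w(z) evaluated at z = (x + i*gamma)/(sigma*sqrt(2)), and the plasma dispersion function is Z(z) = i*sqrt(pi)*w(z), so a routine for one serves for the other. The snippet below uses SciPy's wofz; the test values are arbitrary.

```python
# Sketch of the Lorentzian-Gaussian (Voigt) convolution evaluated through
# the Faddeeva function w(z); the plasma dispersion function is
# Z(z) = i*sqrt(pi)*w(z), which is the link the abstract refers to.
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian of std sigma convolved with Lorentzian HWHM gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-5, 5, 11)
print(voigt(x, sigma=1.0, gamma=0.5))
# Limiting checks: gamma -> 0 recovers a Gaussian, sigma -> 0 a Lorentzian.
```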

  12. Non-Gaussianity from isocurvature perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, Masahiro; Nakayama, Kazunori; Sekiguchi, Toyokazu; Suyama, Teruaki [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Takahashi, Fuminobu, E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: nakayama@icrr.u-tokyo.ac.jp, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: suyama@icrr.u-tokyo.ac.jp, E-mail: fuminobu.takahashi@ipmu.jp [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa 277-8568 (Japan)

    2008-11-15

    We develop a formalism for studying non-Gaussianity in both curvature and isocurvature perturbations. It is shown that non-Gaussianity in the isocurvature perturbation between dark matter and photons leaves distinct signatures in the cosmic microwave background temperature fluctuations, which may be confirmed in future experiments, or possibly even in the currently available observational data. As an explicit example, we consider the quantum chromodynamics axion and show that it can actually induce sizable non-Gaussianity for the inflationary scale, H{sub inf} = O(10{sup 9}-10{sup 11}) GeV.

  13. Handbook of Gaussian basis sets

    International Nuclear Information System (INIS)

    Poirier, R.; Kari, R.; Csizmadia, I.G.

    1985-01-01

    A collection of a large body of information is presented useful for chemists involved in molecular Gaussian computations. Every effort has been made by the authors to collect all available data for cartesian Gaussian as found in the literature up to July of 1984. The data in this text includes a large collection of polarization function exponents but in this case the collection is not complete. Exponents for Slater type orbitals (STO) were included for completeness. This text offers a collection of Gaussian exponents primarily without criticism. (Auth.)

  14. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    Science.gov (United States)

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.
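
    A hedged sketch of the idea (not the paper's exact algorithm or thresholds): apply a kurtosis test to the RSS samples of a reference point and fall back to a two-component ("double-peak") Gaussian mixture when a single Gaussian is rejected. The significance level and the synthetic data are assumptions for the example.

```python
# Decide between a single Gaussian and a two-component ("double-peak")
# mixture for the RSS samples of one reference point.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def fit_rss_distribution(rss, alpha=0.05):
    rss = np.asarray(rss, dtype=float)
    _, p_value = stats.kurtosistest(rss)      # tests against normal kurtosis
    if p_value >= alpha:                       # consistent with a single Gaussian
        return ("gaussian", rss.mean(), rss.std(ddof=1))
    gmm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
    return ("double_peak", gmm.weights_, gmm.means_.ravel(),
            np.sqrt(gmm.covariances_.ravel()))

rng = np.random.default_rng(2)
rss = np.concatenate([rng.normal(-60, 2, 300), rng.normal(-72, 2, 300)])
print(fit_rss_distribution(rss)[0])            # expected: "double_peak"
```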

  15. HGSYSTEMUF6, Simulating Dispersion Due to Atmospheric Release of Uranium Hexafluoride (UF6)

    International Nuclear Information System (INIS)

    Hanna, G; Chang, J.C.; Zhang, J.X.; Bloom, S.G.; Goode, W.D. Jr; Lombardi, D.A.; Yambert, M.W.

    2001-01-01

    1 - Description of program or function: HGSYSTEMUF6 is a suite of models designed for use in estimating consequences associated with accidental, atmospheric release of Uranium Hexafluoride (UF 6 ) and its reaction products, namely Hydrogen Fluoride (HF), and other non-reactive contaminants which are either negatively, neutrally, or positively buoyant. It is based on HGSYSTEM Version 3.0 of Shell Research LTD., and contains specific algorithms for the treatment of UF 6 chemistry and thermodynamics. HGSYSTEMUF6 contains algorithms for the treatment of dense gases, dry and wet deposition, effects due to the presence of buildings (canyon and wake), plume lift-off, and the effects of complex terrain. The model components of the suite include (1) AEROPLUME/RK, used to model near-field dispersion from pressurized two-phase jet releases of UF6 and its reaction products, (2) HEGADAS/UF6 for simulating dense, ground based release of UF 6 , (3) PGPLUME for simulation of passive, neutrally buoyant plumes, (4) UF6Mixer for modeling warm, potentially reactive, ground-level releases of UF 6 from buildings, and (5) WAKE, used to model elevated and ground-level releases into building wake cavities of non-reactive plumes that are either neutrally or positively buoyant. 2 - Methods: The atmospheric release and transport of UF 6 is a complicated process involving the interaction between dispersion, chemical and thermodynamic processes. This process is characterized by four separate stages (flash, sublimation, chemical reaction entrainment and passive dispersion) in which one or more of these processes dominate. The various models contained in the suite are applicable to one or more of these stages. For example, for modeling reactive, multiphase releases of UF 6 , the AEROPLUME/RK component employs a process-splitting scheme which numerically integrates the differential equations governing dispersion, UF 6 chemistry, and thermodynamics. This algorithm is based on the assumption that

  16. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
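
    As a rough illustration of the PCA-plus-GPR pipeline, assuming synthetic low-rank "spectra" and a toy temperature label rather than SDSS/MILES/ELODIE data, a scikit-learn sketch could look like this.

```python
# PCA compression followed by Gaussian process regression on synthetic
# stand-in spectra; not the authors' code or data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 3))                     # hidden "stellar parameters"
spectra = latent @ rng.normal(size=(3, 500)) + 0.1 * rng.normal(size=(200, 500))
teff = 5000 + 200.0 * latent[:, 0]                     # toy effective temperature

model = make_pipeline(
    PCA(n_components=10),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
model.fit(spectra[:150], teff[:150])
pred = model.predict(spectra[150:])
print(np.abs(pred - teff[150:]).mean())                # mean absolute error
```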

  17. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  18. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-06-01

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
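
    The conditioning step that XDGMM adds can be written down from the standard Gaussian conditioning formulas; the sketch below is an independent illustration of that idea, not the EmpiriciSN code, and the example mixture at the end is made up.

```python
# Condition a Gaussian mixture on known values of a subset of dimensions:
# each component's mean/covariance is conditioned and the weights are
# re-scaled by the likelihood of the observed dimensions.
import numpy as np
from scipy.stats import multivariate_normal

def condition_gmm(weights, means, covs, known_idx, known_vals):
    """Return (weights, means, covs) of the mixture over the free dimensions."""
    known_idx = np.asarray(known_idx)
    free_idx = np.array([i for i in range(means.shape[1]) if i not in known_idx])
    new_w, new_mu, new_cov = [], [], []
    for w, mu, cov in zip(weights, means, covs):
        s_bb = cov[np.ix_(known_idx, known_idx)]
        s_ab = cov[np.ix_(free_idx, known_idx)]
        s_aa = cov[np.ix_(free_idx, free_idx)]
        gain = s_ab @ np.linalg.inv(s_bb)
        new_mu.append(mu[free_idx] + gain @ (known_vals - mu[known_idx]))
        new_cov.append(s_aa - gain @ s_ab.T)
        new_w.append(w * multivariate_normal(mu[known_idx], s_bb).pdf(known_vals))
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)

# Example: a 3-D mixture conditioned on the last dimension being 1.2.
w = np.array([0.6, 0.4])
mu = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
cov = np.array([np.eye(3) * 0.5, np.eye(3) * 0.3])
print(condition_gmm(w, mu, cov, known_idx=[2], known_vals=np.array([1.2]))[0])
```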

  19. EmpiriciSN: Re-sampling Observed Supernova/Host Galaxy Populations Using an XD Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Holoien, Thomas W.-S.; /Ohio State U., Dept. Astron. /Ohio State U., CCAPP /KIPAC, Menlo Park /SLAC; Marshall, Philip J.; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC

    2017-05-11

    We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.

  20. Searching for non-Gaussianity in the WMAP data

    International Nuclear Information System (INIS)

    Bernui, A.; Reboucas, M. J.

    2009-01-01

    Some analyses of recent cosmic microwave background (CMB) data have provided hints that there are deviations from Gaussianity in the WMAP CMB temperature fluctuations. Given the far-reaching consequences of such a non-Gaussianity for our understanding of the physics of the early universe, it is important to employ alternative indicators in order to determine whether the reported non-Gaussianity is of cosmological origin, and/or extract further information that may be helpful for identifying its causes. We propose two new non-Gaussianity indicators, based on skewness and kurtosis of large-angle patches of CMB maps, which provide a measure of departure from Gaussianity on large angular scales. A distinctive feature of these indicators is that they provide sky maps of non-Gaussianity of the CMB temperature data, thus allowing a possible additional window into their origins. Using these indicators, we find no significant deviation from Gaussianity in the three and five-year WMAP Internal Linear Combination (ILC) map with KQ75 mask, while the ILC unmasked map exhibits deviation from Gaussianity, quantifying therefore the WMAP team recommendation to employ the new mask KQ75 for tests of Gaussianity. We also use our indicators to test for Gaussianity the single frequency foreground unremoved WMAP three and five-year maps, and show that the K and Ka maps exhibit a clear indication of deviation from Gaussianity even with the KQ75 mask. We show that our findings are robust with respect to the details of the method.
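
    A toy sketch of the indicator on a flat, synthetic map (a real analysis would use spherical patches of a masked CMB map): tile the map into large patches and record the skewness and excess kurtosis of each patch, giving a coarse "map" of local departures from Gaussianity.

```python
# Patch-wise skewness and kurtosis maps on a synthetic 2-D field.
import numpy as np
from scipy.stats import skew, kurtosis

def patch_non_gaussianity(tmap, patch=64):
    ny, nx = tmap.shape
    s_map = np.zeros((ny // patch, nx // patch))
    k_map = np.zeros_like(s_map)
    for i in range(s_map.shape[0]):
        for j in range(s_map.shape[1]):
            block = tmap[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
            s_map[i, j] = skew(block)
            k_map[i, j] = kurtosis(block)      # excess kurtosis, 0 for Gaussian
    return s_map, k_map

tmap = np.random.default_rng(4).normal(size=(512, 512))
s_map, k_map = patch_non_gaussianity(tmap)
print(s_map.std(), k_map.std())
```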

  1. Solving Dynamic Traveling Salesman Problem Using Dynamic Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Stephen M. Akandwanaho

    2014-01-01

    Full Text Available This paper solves the dynamic traveling salesman problem (DTSP) using the dynamic Gaussian Process Regression (DGPR) method. The problem of the varying correlation tour is alleviated by the nonstationary covariance function interleaved with DGPR to generate a predictive distribution for the DTSP tour. This approach is conjoined with the Nearest Neighbor (NN) method and iterated local search to track dynamic optima. Experimental results were obtained on DTSP instances. The comparisons were performed with Genetic Algorithm and Simulated Annealing. The proposed approach demonstrates superiority in finding a good traveling salesman problem (TSP) tour and less computational time in nonstationary conditions.
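
    The Nearest Neighbor construction step mentioned above is easy to sketch on its own; the DGPR prediction and the iterated local search are omitted, and the random city coordinates are an assumption for the example.

```python
# Greedy nearest-neighbour tour construction for a set of cities.
import numpy as np

def nearest_neighbor_tour(coords, start=0):
    """Return the visiting order produced by the greedy NN heuristic."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = coords[tour[-1]]
        nxt = min(unvisited, key=lambda c: np.linalg.norm(coords[c] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = np.random.default_rng(5).random((30, 2))
tour = nearest_neighbor_tour(cities)
length = sum(np.linalg.norm(cities[tour[i]] - cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(len(tour), round(length, 3))
```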

  2. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  3. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime

  4. Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data

    Science.gov (United States)

    Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.

    2017-12-01

    The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with a Gaussian) based microwave temperature and mixing ratio retrieval system as well as the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data is from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in the distribution we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as for educators. However, as the new retrieved values are from a non-Gaussian based 1DVAR then they will display non-Gaussian behaviors that need to pass a quality control measure that is consistent with this distribution, and these new measures are presented here along with initial results for checking the retrievals.

  5. Current algorithms for computed electron beam dose planning

    International Nuclear Information System (INIS)

    Brahme, A.

    1985-01-01

    Two- and sometimes three-dimensional computer algorithms for electron beam irradiation are capable of taking all irregularities of the body cross-section and the properties of the various tissues into account. This is achieved by dividing the incoming broad beams into a number of narrow pencil beams, the penetration of which can be described by essentially one-dimensional formalisms. The constituent pencil beams are most often described by Gaussian, experimentally or theoretically derived distributions. The accuracy of different dose planning algorithms is discussed in some detail based on their ability to take the different physical interaction processes of high energy electrons into account. It is shown that those programs that take the deviations from the simple Gaussian model into account give the best agreement with experimental results. With such programs a dosimetric relative accuracy of about 5% is generally achieved except in the most complex inhomogeneity configurations. Finally, the present limitations and possible future developments of electron dose planning are discussed. (orig.)

  6. Modeling the generation and dispersion of odors from mushroom composting facilities

    International Nuclear Information System (INIS)

    Heinemann, P.; Wahanik, D.

    1998-01-01

    An odor source generation model and an odor dispersion model were developed to predict the local distribution of odors emanating from mushroom composting facilities. The odor source generation model allowed for simulation of various composting wharf configurations and odor source strengths. This model was linked to a Gaussian plume diffusion model that predicted odor dispersion. Dimethyl disulfide production at a rate of 1760 micrograms/h was simulated by the source generation model and six different atmospheric conditions were analyzed to demonstrate the effect of wind speed, atmospheric stability, and source generation on the dispersion of this odor producing compound. Detectable levels of dimethyl disulfide were predicted to range from less than 100 m from the source during very unstable conditions to almost 5000 m during very stable conditions
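
    A minimal steady-state Gaussian plume sketch of the dispersion step, with crude linear sigma_y/sigma_z growth standing in for proper stability-class (e.g. Pasquill-Gifford) parameterisations; the composting-wharf source-generation model is not reproduced, and the 1760 micrograms/h rate from the abstract is simply converted to a per-second value.

```python
# Steady-state Gaussian plume with total reflection at the ground (z = 0).
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
    """Concentration at receptor (x, y, z) for a point source.

    x, y, z : receptor coordinates [m] (x downwind), Q : emission rate [mass/s],
    u : wind speed [m/s], H : effective release height [m].
    """
    sigma_y = a * x            # crude linear growth with downwind distance
    sigma_z = b * x
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground-reflected term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline ground-level concentration 100-1000 m downwind
# for the 1760 micrograms/h source rate quoted in the abstract.
x = np.linspace(100, 1000, 10)
print(gaussian_plume(x, y=0.0, z=0.0, Q=1760 / 3600.0, u=2.0, H=3.0))
```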

  7. A Novel Evolutionary Algorithm Inspired by Beans Dispersal

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhang

    2013-02-01

    Full Text Available Inspired by the transmission of beans in nature, a novel evolutionary algorithm-Bean Optimization Algorithm (BOA is proposed in this paper. BOA is mainly based on the normal distribution which is an important continuous probability distribution of quantitative phenomena. Through simulating the self-adaptive phenomena of plant, BOA is designed for solving continuous optimization problems. We also analyze the global convergence of BOA by using the Solis and Wetsarsquo; research results. The conclusion is that BOA can converge to the global optimization solution with probability one. In order to validate its effectiveness, BOA is tested against benchmark functions. And its performance is also compared with that of particle swarm optimization (PSO algorithm. The experimental results show that BOA has competitive performance to PSO in terms of accuracy and convergence speed on the explored tests and stands out as a promising alternative to existing optimization methods for engineering designs or applications.

  8. Gaussian operations and privacy

    International Nuclear Information System (INIS)

    Navascues, Miguel; Acin, Antonio

    2005-01-01

    We consider the possibilities offered by Gaussian states and operations for two honest parties, Alice and Bob, to obtain privacy against a third eavesdropping party, Eve. We first extend the security analysis of the protocol proposed in [Navascues et al. Phys. Rev. Lett. 94, 010502 (2005)]. Then, we prove that a generalized version of this protocol does not allow one to distill a secret key out of bound entangled Gaussian states

  9. Application of data assimilation to improve the forecasting capability of an atmospheric dispersion model for a radioactive plume

    International Nuclear Information System (INIS)

    Jeong, H.J.; Han, M.H.; Hwang, W.T.; Kim, E.H.

    2008-01-01

    Modeling the atmospheric dispersion of a radioactive plume plays an influential role in assessing the environmental impacts caused by nuclear accidents. The performance of data assimilation techniques that combine Gaussian model outputs and measurements to improve forecasting ability is investigated in this study. Tracer dispersion experiments are performed to produce field data by assuming a radiological emergency. An adaptive neuro-fuzzy inference system (ANFIS) and a linear regression filter are considered to assimilate the Gaussian model outputs with measurements. ANFIS is trained so that the model outputs are likely to be more accurate for the experimental data. The linear regression filter is designed to assimilate measurements similarly to the ANFIS. It is confirmed that ANFIS could be an appropriate method for improving the forecasting capability of an atmospheric dispersion model in the case of a radiological emergency, judging from the higher correlation coefficients between the measured and the assimilated values than those obtained with the linear regression filter. This kind of data assimilation method could support a decision-making system when deciding on the best available countermeasures for public health from among emergency preparedness alternatives.
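
    A simple sketch of the linear-regression assimilation baseline described above, assuming synthetic model output and measurements: learn a linear correction from Gaussian-model concentrations to observed ones and apply it to new forecasts. The ANFIS variant would replace the linear map with a neuro-fuzzy model.

```python
# Linear-regression "filter": fit measured = a * model_output + b on past
# data, then correct new model forecasts with the fitted map.
import numpy as np

rng = np.random.default_rng(6)
model_output = rng.lognormal(mean=0.0, sigma=1.0, size=200)    # plume-model concentrations
measured = 0.7 * model_output + 0.2 + rng.normal(0, 0.1, 200)  # synthetic field data

a, b = np.polyfit(model_output, measured, deg=1)               # least-squares fit

new_forecast = rng.lognormal(mean=0.0, sigma=1.0, size=50)
assimilated = a * new_forecast + b                             # corrected forecast
print(round(a, 3), round(b, 3), assimilated[:3])
```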

  10. The shape of velocity dispersion profiles and the dynamical state of galaxy clusters

    Science.gov (United States)

    Costa, A. P.; Ribeiro, A. L. B.; de Carvalho, R. R.

    2018-01-01

    Motivated by the existence of the relationship between the dynamical state of clusters and the shape of the velocity dispersion profiles (VDPs), we study the VDPs for Gaussian (G) and non-Gaussian (NG) systems for a subsample of clusters from the Yang catalogue. The groups cover a redshift interval of 0.03 ≤ z ≤ 0.1 with halo mass ≥ 10^14 M⊙. We use a robust statistical method, Hellinger Distance, to classify the dynamical state of the systems according to their velocity distribution. The stacked VDP of each class, G and NG, is then determined using either Bright or Faint galaxies. The stacked VDP for G groups displays a central peak followed by a monotonically decreasing trend which indicates a predominance of radial orbits, with the Bright stacked VDP showing lower velocity dispersions in all radii. The distinct features we find in NG systems are manifested not only by the characteristic shape of VDP, with a depression in the central region, but also by a possible higher infall rate associated with galaxies in the Faint stacked VDP.
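
    As a toy illustration of how a velocity dispersion profile is built, assuming synthetic radii and line-of-sight velocities: bin galaxies in projected radius and compute the velocity dispersion per bin; stacking many G or NG systems would average such profiles.

```python
# Velocity dispersion as a function of projected radius, in radial bins.
import numpy as np

def velocity_dispersion_profile(radii, velocities, n_bins=8):
    edges = np.linspace(0.0, radii.max(), n_bins + 1)
    centres, sigmas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (radii >= lo) & (radii < hi)
        if sel.sum() > 5:                      # require a minimal bin population
            centres.append(0.5 * (lo + hi))
            sigmas.append(np.std(velocities[sel], ddof=1))
    return np.array(centres), np.array(sigmas)

rng = np.random.default_rng(7)
r = rng.random(500)                            # projected radius (arbitrary units)
v = rng.normal(0, 600 * np.exp(-r), 500)       # dispersion falling with radius
print(velocity_dispersion_profile(r, v))
```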

  11. A pooling-LiNGAM algorithm for effective connectivity analysis of fMRI data

    Directory of Open Access Journals (Sweden)

    Lele eXu

    2014-10-01

    Full Text Available The Independent Component Analysis - linear non-Gaussian acyclic model (LiNGAM), an algorithm that can be used to estimate the causal relationship among non-Gaussian distributed data, has potential value for detecting the effective connectivity of human brain areas. Under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) data have non-Gaussian distributions, LiNGAM can be used to discover the complete causal structure of data. Previous studies reveal that the algorithm performs well when the number of data points being analyzed is relatively large. However, there are too few data points in most neuroimaging recordings, especially functional magnetic resonance imaging (fMRI), to allow the algorithm to converge. Smith's study suggests that pooling data points across subjects may be useful to address this issue (Smith et al., 2011). Thus this study focuses on validating Smith's proposal of pooling data points across subjects for the use of LiNGAM, and this method is named pooling-LiNGAM (pLiNGAM). Using both simulated and real fMRI data, our current study demonstrates the feasibility and efficiency of pLiNGAM for effective connectivity estimation.

  12. Geometry of perturbed Gaussian states and quantum estimation

    International Nuclear Information System (INIS)

    Genoni, Marco G; Giorda, Paolo; Paris, Matteo G A

    2011-01-01

    We address the non-Gaussianity (nG) of states obtained by weakly perturbing a Gaussian state and investigate the relationships with quantum estimation. For classical perturbations, i.e. perturbations to eigenvalues, we found that the nG of the perturbed state may be written as the quantum Fisher information (QFI) distance minus a term depending on the infinitesimal energy change, i.e. it provides a lower bound to statistical distinguishability. Upon moving on isoenergetic surfaces in a neighbourhood of a Gaussian state, nG thus coincides with a proper distance in the Hilbert space and exactly quantifies the statistical distinguishability of the perturbations. On the other hand, for perturbations leaving the covariance matrix unperturbed, we show that nG provides an upper bound to the QFI. Our results show that the geometry of non-Gaussian states in the neighbourhood of a Gaussian state is definitely not trivial and cannot be subsumed by a differential structure. Nevertheless, the analysis of perturbations to a Gaussian state reveals that nG may be a resource for quantum estimation. The nG of specific families of perturbed Gaussian states is analysed in some detail with the aim of finding the maximally non-Gaussian state obtainable from a given Gaussian one. (fast track communication)

  13. A dispersion modelling system for urban air pollution

    Energy Technology Data Exchange (ETDEWEB)

    Karppinen, A.; Kukkonen, J.; Nordlund, G.; Rantakrans, E.; Valkama, I.

    1998-10-01

    An Urban Dispersion Modelling system, UDM-FMI, developed at the Finnish Meteorological Institute, is described in the report. The modelling system includes a multiple-source Gaussian plume model and a meteorological pre-processing model. The dispersion model is an integrated urban-scale model, taking into account all source categories (point, line, area and volume sources). It includes a treatment of chemical transformation (for NO{sub 2}), wet and dry deposition (for SO{sub 2}), plume rise, downwash phenomena and dispersion of inert particles. The model also allows for the influence of a finite mixing height. The model structure is mainly based on state-of-the-art methodology. The system also computes statistical parameters from the time series, which can be compared to air quality guidelines. The relevant meteorological parameters for the dispersion model are evaluated using data produced by a meteorological pre-processor. The model is based mainly on the energy budget method. Results of national investigations have been used for evaluating climate-dependent parameters. The model utilises the synoptic meteorological observations, radiation records and aerological sounding observations. The model results include the hourly time series of the relevant atmospheric turbulence parameters. 51 refs.

  14. A Tracer Experiment to Understand Dispersion Characteristics at a Nuclear Power Plant Site-Focusing on the Comparison with Predictive Results using Reg. Guide 1.145 model

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyojoon; Kim, Eunhan; Jeong, Haesun; Hwang, Wontae; Han, Moonhee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    There remains disagreement regarding the application of a Gaussian plume model in PAVAN, as it relates to the complicated geographical features of a coastal area. Therefore, this study was performed in order to figure out the characteristics of the PAVAN program that was developed based on the equations of Gaussian Plume Model, which reflected the actual measured concentration of radioactive materials released to the air. It also evaluated the appropriateness of using a Gaussian plume model for assessing the environmental impact of radiation from a nuclear power plant. In order to analyze the dispersion characteristics of radioactive materials released into the air from the Wolsong nuclear power plant, SF{sub 6} gas was released from the site at night for one hour under stable atmospheric conditions disadvantageous to dilute a tracer gas in this study. The measured concentrations were compared with theoretical estimates derived from meteorological data observed during the experiment period to evaluate the prediction capabilities of the Gaussian plume model. This study conducted a tracer dispersion experiment at the site of Wolsong Nuclear Power Plant site in Korea to analyze the atmospheric dispersion characteristics of radioactive materials. It compared the experimental value with the calculated value using the Gaussian Plume Model as suggested in Reg. 1.145, based on the meteorological data observed in the experiment time period, and evaluated the conservative estimate of the calculated value. In the area where the calculated value is relatively high, the calculated value tends to show higher than the experimental value, which confirmed the conservative manner of the estimating of the calculated value using the Gaussian Plume Model. The short-term exposure of radiation to a human body caused by a nuclear accident would be higher in the area where the atmospheric concentration of radiation is high. Therefore, it is a sufficiently conservative manner to use the

  15. A Tracer Experiment to Understand Dispersion Characteristics at a Nuclear Power Plant Site-Focusing on the Comparison with Predictive Results using Reg. Guide 1.145 model

    International Nuclear Information System (INIS)

    Jeong, Hyojoon; Kim, Eunhan; Jeong, Haesun; Hwang, Wontae; Han, Moonhee

    2014-01-01

    There remains disagreement regarding the application of a Gaussian plume model in PAVAN, as it relates to the complicated geographical features of a coastal area. Therefore, this study was performed in order to figure out the characteristics of the PAVAN program that was developed based on the equations of Gaussian Plume Model, which reflected the actual measured concentration of radioactive materials released to the air. It also evaluated the appropriateness of using a Gaussian plume model for assessing the environmental impact of radiation from a nuclear power plant. In order to analyze the dispersion characteristics of radioactive materials released into the air from the Wolsong nuclear power plant, SF 6 gas was released from the site at night for one hour under stable atmospheric conditions disadvantageous to dilute a tracer gas in this study. The measured concentrations were compared with theoretical estimates derived from meteorological data observed during the experiment period to evaluate the prediction capabilities of the Gaussian plume model. This study conducted a tracer dispersion experiment at the site of Wolsong Nuclear Power Plant site in Korea to analyze the atmospheric dispersion characteristics of radioactive materials. It compared the experimental value with the calculated value using the Gaussian Plume Model as suggested in Reg. 1.145, based on the meteorological data observed in the experiment time period, and evaluated the conservative estimate of the calculated value. In the area where the calculated value is relatively high, the calculated value tends to show higher than the experimental value, which confirmed the conservative manner of the estimating of the calculated value using the Gaussian Plume Model. The short-term exposure of radiation to a human body caused by a nuclear accident would be higher in the area where the atmospheric concentration of radiation is high. Therefore, it is a sufficiently conservative manner to use the Gaussian

  16. Spectrum sharing opportunities of full-duplex systems using improper Gaussian signaling

    KAUST Repository

    Gaafar, Mohamed

    2015-08-01

    Sharing the licensed spectrum of full-duplex (FD) primary users (PU) brings strict limitations on the underlay cognitive radio operation. Particularly, the self interference may overwhelm the PU receiver and limit the opportunity of secondary users (SU) to access the spectrum. Improper Gaussian signaling (IGS) has demonstrated its superiority in improving the performance of interference channel systems. Throughout this paper, we assume a FD PU pair that uses proper Gaussian signaling (PGS), and a half-duplex SU pair that uses IGS. The objective is to maximize the SU instantaneous achievable rate while meeting the PU quality-of-service. To this end, we propose a simplified algorithm that optimizes the SU signal parameters, i.e., the transmit power and the circularity coefficient, which is a measure of the degree of impropriety of the SU signal, to achieve the design objective. Numerical results show the merits of adopting IGS compared with PGS for the SU especially with the existence of weak PU direct channels and/or strong SU interference channels.

  17. Non-Gaussianity in island cosmology

    International Nuclear Information System (INIS)

    Piao Yunsong

    2009-01-01

    In this paper we fully calculate the non-Gaussianity of primordial curvature perturbation of the island universe by using the second order perturbation equation. We find that for the spectral index n s ≅0.96, which is favored by current observations, the non-Gaussianity level f NL seen in an island will generally lie between 30 and 60, which may be tested by the coming observations. In the landscape, the island universe is one of anthropically acceptable cosmological histories. Thus the results obtained in some sense mean the coming observations, especially the measurement of non-Gaussianity, will be significant to clarify how our position in the landscape is populated.

  18. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Alrashdi, Ayed; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2017-01-01

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  19. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-09-27

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  20. Monogamy inequality for distributed gaussian entanglement.

    Science.gov (United States)

    Hiroshima, Tohya; Adesso, Gerardo; Illuminati, Fabrizio

    2007-02-02

    We show that for all n-mode Gaussian states of continuous variable systems, the entanglement shared among n parties exhibits the fundamental monogamy property. The monogamy inequality is proven by introducing the Gaussian tangle, an entanglement monotone under Gaussian local operations and classical communication, which is defined in terms of the squared negativity in complete analogy with the case of n-qubit systems. Our results elucidate the structure of quantum correlations in many-body harmonic lattice systems.

  1. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
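
    The record is truncated, but the E- and M-steps being distributed are the standard GMM updates; a compact single-machine sketch for a one-dimensional mixture (synthetic data, fixed iteration count) is shown below.

```python
# Plain EM for a 1-D Gaussian mixture; the distributed version would
# exchange the sufficient statistics of these same steps between peers.
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, k=2, n_iter=100):
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))   # spread initial means
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = w * norm.pdf(x[:, None], mu, sigma)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

x = np.concatenate([np.random.default_rng(1).normal(-2, 1.0, 400),
                    np.random.default_rng(2).normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```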

  2. A Review of Algorithms for Retinal Vessel Segmentation

    Directory of Open Access Journals (Sweden)

    Monserrate Intriago Pazmiño

    2014-10-01

    Full Text Available This paper presents a review of algorithms for extracting the blood vessel network from retinal images. Since the retina is a complex and delicate ocular structure, a huge effort in computer vision is devoted to studying the blood vessel network to help the diagnosis of pathologies like diabetic retinopathy, hypertensive retinopathy, retinopathy of prematurity or glaucoma. To carry out this process, many works for normal and abnormal images have been proposed recently. These methods include combinations of algorithms like Gaussian and Gabor filters, histogram equalization, clustering, binarization, motion contrast, matched filters, combined corner/edge detectors, multi-scale line operators, neural networks, ants, genetic algorithms and morphological operators. To apply these algorithms, pre-processing tasks are needed. Most of these algorithms have been tested on publicly available retinal databases. We have included a table summarizing the algorithms and the results of their assessment.

  3. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search Motion Estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  4. Three-dimensional propagation and absorption of high frequency Gaussian beams in magnetoactive plasmas

    International Nuclear Information System (INIS)

    Nowak, S.; Orefice, A.

    1994-01-01

    In today's high frequency systems employed for plasma diagnostics, power heating, and current drive the behavior of the wave beams is appreciably affected by the self-diffraction phenomena due to their narrow collimation. In the present article the three-dimensional propagation of Gaussian beams in inhomogeneous and anisotropic media is analyzed, starting from a properly formulated dispersion relation. Particular attention is paid, in the case of electromagnetic electron cyclotron (EC) waves, to the toroidal geometry characterizing tokamak plasmas, to the power density evolution on the advancing wave fronts, and to the absorption features occurring when a beam crosses an EC resonant layer

  5. Atmospheric dispersion of radionuclides released by a nuclear plant

    International Nuclear Information System (INIS)

    Barboza, A.A.

    1989-01-01

    A numerical model has been developed to simulate the atmospheric dispersion of radionuclides released by a nuclear plant operating under normal conditions. The model, based on a Gaussian plume representation, accounts for and evaluates several factors which affect the concentration of effluents in the atmosphere, such as: resuspension, deposition, radioactive decay, energy and type of the radiation emitted, among others. The concentration of effluents in the atmosphere is calculated for a uniform mesh of points around the plant, allowing the equivalent doses to be then evaluated. Simulations of the atmospheric dispersion of radioactive plumes of Cs 137 and Ar 41 have been performed assuming a constant rate of release, as expected from the normal operation of a nuclear plant. Finally, this work analyzes the equivalent doses at ground level due to the dispersion of Cs 137 and Ar 41, accumulated over one year and determines the isodose curves for a hypothetical site. (author) [pt

  6. Improvement and implementation for Canny edge detection algorithm

    Science.gov (United States)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment was used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. In order to reduce the sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The whole algorithm was simulated with the OpenCV 2.4.0 library in the VS2010 environment, and the experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
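
    A hedged sketch of the pipeline in OpenCV-Python, assuming a synthetic test image: bilateral filtering in place of Gaussian smoothing and Otsu's threshold to set the Canny hysteresis thresholds. The paper's pixel-similarity compensation term and four-direction gradient templates are not included, and the 0.5 ratio for the low threshold is an assumption.

```python
# Bilateral smoothing + Otsu-derived dual thresholds feeding cv2.Canny.
import cv2
import numpy as np

def improved_canny(gray):
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    # Otsu's method adaptively splits the intensity histogram; use the
    # resulting value as the high Canny threshold and half of it as the low one.
    otsu_thresh, _ = cv2.threshold(smoothed, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(smoothed, 0.5 * otsu_thresh, otsu_thresh)

if __name__ == "__main__":
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(img, (64, 64), 30, 255, -1)
    noise = np.random.default_rng(0).integers(0, 30, img.shape, dtype=np.uint8)
    edges = improved_canny(cv2.add(img, noise))
    print(int(edges.sum() / 255), "edge pixels")
```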

  7. Non-Gaussianity from inflation: theory and observations

    Science.gov (United States)

    Bartolo, N.; Komatsu, E.; Matarrese, S.; Riotto, A.

    2004-11-01

    This is a review of models of inflation and of their predictions for the primordial non-Gaussianity in the density perturbations which are thought to be at the origin of structures in the Universe. Non-Gaussianity emerges as a key observable to discriminate among competing scenarios for the generation of cosmological perturbations and is one of the primary targets of present and future Cosmic Microwave Background satellite missions. We give a detailed presentation of the state-of-the-art of the subject of non-Gaussianity, both from the theoretical and the observational point of view, and provide all the tools necessary to compute at second order in perturbation theory the level of non-Gaussianity in any model of cosmological perturbations. We discuss the new wave of models of inflation, which are firmly rooted in modern particle physics theory and predict a significant amount of non-Gaussianity. The review is addressed to both astrophysicists and particle physicists and contains useful tables which summarize the theoretical and observational results regarding non-Gaussianity.

  8. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    Full Text Available This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  9. Non-gaussianity versus nonlinearity of cosmological perturbations.

    Science.gov (United States)

    Verde, L

    2001-06-01

    Following the discovery of the cosmic microwave background, the hot big-bang model has become the standard cosmological model. In this theory, small primordial fluctuations are subsequently amplified by gravity to form the large-scale structure seen today. Different theories for unified models of particle physics, lead to different predictions for the statistical properties of the primordial fluctuations, that can be divided in two classes: gaussian and non-gaussian. Convincing evidence against or for gaussian initial conditions would rule out many scenarios and point us toward a physical theory for the origin of structures. The statistical distribution of cosmological perturbations, as we observe them, can deviate from the gaussian distribution in several different ways. Even if perturbations start off gaussian, nonlinear gravitational evolution can introduce non-gaussian features. Additionally, our knowledge of the Universe comes principally from the study of luminous material such as galaxies, but galaxies might not be faithful tracers of the underlying mass distribution. The relationship between fluctuations in the mass and in the galaxies distribution (bias), is often assumed to be local, but could well be nonlinear. Moreover, galaxy catalogues use the redshift as third spatial coordinate: the resulting redshift-space map of the galaxy distribution is nonlinearly distorted by peculiar velocities. Nonlinear gravitational evolution, biasing, and redshift-space distortion introduce non-gaussianity, even in an initially gaussian fluctuation field. I investigate the statistical tools that allow us, in principle, to disentangle the above different effects, and the observational datasets we require to do so in practice.

  10. Galaxy bias and primordial non-Gaussianity

    Energy Technology Data Exchange (ETDEWEB)

    Assassi, Valentin; Baumann, Daniel [DAMTP, Cambridge University, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Schmidt, Fabian, E-mail: assassi@ias.edu, E-mail: D.D.Baumann@uva.nl, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany)

    2015-12-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation.

  11. Optimal cloning of mixed Gaussian states

    International Nuclear Information System (INIS)

    Guta, Madalin; Matsumoto, Keiji

    2006-01-01

    We construct the optimal one to two cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator, with a fixed and known temperature. The transformation is Gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance. The proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of Gaussian states which is then reduced to an optimization problem for diagonal states of a quantum oscillator. A key concept in finding the optimum is that of stochastic ordering which plays a similar role in the purely classical problem of Gaussian cloning. The result is then extended to the case of n to m cloning of mixed Gaussian states

  12. Galaxy bias and primordial non-Gaussianity

    International Nuclear Information System (INIS)

    Assassi, Valentin; Baumann, Daniel; Schmidt, Fabian

    2015-01-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation

  13. Spectral phase shift and residual angular dispersion of an acousto-optic programmable dispersive filter

    International Nuclear Information System (INIS)

    Boerzsoenyi, A.; Meroe, M.

    2010-01-01

    Complete text of publication follows. There is an increasing demand for active and precise dispersion control of ultrashort laser pulses. In chirped pulse amplification (CPA) laser systems, the dispersion of the optical elements of the laser has to be compensated at least to the fourth order to obtain high temporal contrast compressed pulses. Nowadays the most convenient device for active and programmable control of the spectral phase and amplitude of broadband laser pulses is the acousto-optic programmable dispersive filter (AOPDF), claimed to be able to adjust the spectral phase up to the fourth order. Although it has been widely used, surprisingly enough there has been only a single, low-resolution measurement reported on the accuracy of the induced spectral phase shift of the device. In our paper we report on the first systematic experiment aiming at the precise characterization of an AOPDF device. In the experiment the spectral phase shift of the AOPDF device was measured by spectrally and spatially resolved interferometry, which is an especially powerful tool for determining small dispersion values with high accuracy. Besides the spectral phase dispersion, we measured both the propagation direction angular dispersion (PDAD) and the phase front angular dispersion (PhFAD). Although the two quantities are equal for plane waves, there may be a noticeable difference for Gaussian pulses. PDAD was determined simply by focusing the beam on the slit of an imaging spectrograph, while PhFAD was measured by the use of an inverted Mach-Zehnder interferometer and an imaging spectrograph. In the measurements, the spectral phase shift and both types of angular dispersion have been recorded upon the systematic change of all the accessible functions of the acousto-optic programmable dispersive filter. The measured values of group delay dispersion (GDD) and third-order dispersion (TOD) have been found to agree with the preset values within the error of the measurement (1 fs² and 10 fs³, respectively).

  14. Generating Correlated QPSK Waveforms By Exploiting Real Gaussian Random Variables

    KAUST Repository

    Jardak, Seifallah

    2012-11-01

    The design of waveforms with specified auto- and cross-correlation properties has a number of applications in multiple-input multiple-output (MIMO) radar, one of them being the desired transmit beampattern design. In this work, an algorithm is proposed to generate quadrature phase-shift keying (QPSK) waveforms with required cross-correlation properties using real Gaussian random variables (RVs). This work can be considered as an extension of what was presented in [1] to generate BPSK waveforms. This work will be extended to the generation of correlated higher-order phase-shift keying (PSK) and quadrature amplitude modulation (QAM) schemes that can better approximate the desired beampattern.
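
    A minimal sketch of the underlying idea (not the authors' exact algorithm): draw real Gaussian random variables with a prescribed covariance via a Cholesky factor and hard-limit them to QPSK symbols. The target covariance and the names below are illustrative assumptions; note that sign-quantization changes the correlation through the arcsine law, and mapping the desired waveform correlation back through that relation is the kind of step such an algorithm has to perform.

      import numpy as np

      def correlated_qpsk(R, n_samples, seed=None):
          """Generate QPSK sequences whose cross-correlation is shaped by
          correlated real Gaussian RVs (illustrative stand-in for the paper's mapping)."""
          rng = np.random.default_rng(seed)
          K = R.shape[0]
          L = np.linalg.cholesky(R)                      # R = L L^T
          gi = L @ rng.standard_normal((K, n_samples))   # in-phase Gaussians
          gq = L @ rng.standard_normal((K, n_samples))   # quadrature Gaussians
          # Hard-limit each branch; E[sign(gi) sign(gj)] = (2/pi) * arcsin(R_ij).
          return (np.sign(gi) + 1j * np.sign(gq)) / np.sqrt(2)

      R = np.array([[1.0, 0.6], [0.6, 1.0]])
      x = correlated_qpsk(R, 100_000, seed=0)
      print(np.real(x @ x.conj().T) / x.shape[1])        # empirical cross-correlation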

  15. Generating Correlated QPSK Waveforms By Exploiting Real Gaussian Random Variables

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2012-01-01

    The design of waveforms with specified auto- and cross-correlation properties has a number of applications in multiple-input multiple-output (MIMO) radar, one of them being the desired transmit beampattern design. In this work, an algorithm is proposed to generate quadrature phase-shift keying (QPSK) waveforms with required cross-correlation properties using real Gaussian random variables (RVs). This work can be considered as an extension of what was presented in [1] to generate BPSK waveforms. This work will be extended to the generation of correlated higher-order phase-shift keying (PSK) and quadrature amplitude modulation (QAM) schemes that can better approximate the desired beampattern.

  16. Laguerre Gaussian beam multiplexing through turbulence

    CSIR Research Space (South Africa)

    Trichili, A

    2014-08-17

    We analyze the effect of atmospheric turbulence on the propagation of multiplexed Laguerre Gaussian modes. We present a method to multiplex Laguerre Gaussian modes using digital holograms and decompose the resulting field after encountering a...

  17. 1-D profiling using highly dispersive guided waves

    Science.gov (United States)

    Volker, Arno; van Zon, Tim

    2014-02-01

    Corrosion is one of the industry's major issues regarding the integrity of assets. Currently, inspections are conducted at regular intervals to ensure a sufficient integrity level of these assets. Cost reduction while maintaining a high level of reliability and safety of installations is a major challenge. There are many situations where the actual defect location is not accessible, e.g., a pipe support or a partially buried pipe. Guided wave tomography has been developed to reconstruct the wall thickness of steel pipes. In the case of bottom-of-the-line corrosion, i.e., a single corrosion pit, a simpler approach may be followed. Data is collected in a pitch-catch configuration at the 12 o'clock position using highly dispersive guided waves. After dispersion correction the data collapses to a short pulse; any residual dispersion indicates wall loss. The phase spectrum is used to invert for the wall thickness profile in the circumferential direction, assuming a Gaussian defect profile. The approach is evaluated on numerically simulated and on measured data. The method is intended for rapid, semi-quantitative screening of pipes.

  18. 1-D profiling using highly dispersive guided waves

    International Nuclear Information System (INIS)

    Volker, Arno; Zon, Tim van

    2014-01-01

    Corrosion is one of the industry's major issues regarding the integrity of assets. Currently, inspections are conducted at regular intervals to ensure a sufficient integrity level of these assets. Cost reduction while maintaining a high level of reliability and safety of installations is a major challenge. There are many situations where the actual defect location is not accessible, e.g., a pipe support or a partially buried pipe. Guided wave tomography has been developed to reconstruct the wall thickness of steel pipes. In the case of bottom-of-the-line corrosion, i.e., a single corrosion pit, a simpler approach may be followed. Data is collected in a pitch-catch configuration at the 12 o'clock position using highly dispersive guided waves. After dispersion correction the data collapses to a short pulse; any residual dispersion indicates wall loss. The phase spectrum is used to invert for the wall thickness profile in the circumferential direction, assuming a Gaussian defect profile. The approach is evaluated on numerically simulated and on measured data. The method is intended for rapid, semi-quantitative screening of pipes

  19. On signal design by the R0 criterion for non-white Gaussian noise channels

    Science.gov (United States)

    Bordelon, D. L.

    1977-01-01

    The use of the cut-off rate criterion for modulation system design is investigated for channels with non-white Gaussian noise. A signal space representation of the waveform channel is developed, and the cut-off rate for vector channels with additive non-white Gaussian noise and unquantized demodulation is derived. When the signal input to the channel is a continuous random vector, maximization of the cut-off rate with constrained average signal energy leads to a water-filling interpretation of optimal energy distribution in signal space. The necessary condition for a finite signal set to maximize the cut-off rate with constrained energy and an equally likely probability assignment of signal vectors is presented, and an algorithm is outlined for numerically computing the optimum signal set. As an example, the rectangular signal set which has the water-filling average energy distribution and the optimum rectangular set are compared.
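
    The water-filling energy allocation mentioned above can be illustrated with a short numerical sketch; the bisection on the water level and the toy noise variances are assumptions for illustration, not values from the report.

      import numpy as np

      def water_filling(noise_var, total_energy, tol=1e-9):
          """Distribute total_energy over parallel channels with noise variances
          noise_var so that E_k = max(0, mu - N_k) and sum_k E_k = total_energy.
          The water level mu is found by bisection."""
          noise_var = np.asarray(noise_var, dtype=float)
          lo, hi = noise_var.min(), noise_var.max() + total_energy
          while hi - lo > tol:
              mu = 0.5 * (lo + hi)
              used = np.maximum(0.0, mu - noise_var).sum()
              if used > total_energy:
                  hi = mu
              else:
                  lo = mu
          return np.maximum(0.0, 0.5 * (lo + hi) - noise_var)

      print(water_filling([1.0, 2.0, 4.0], total_energy=3.0))   # -> [2, 1, 0]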

  20. Phase statistics in non-Gaussian scattering

    International Nuclear Information System (INIS)

    Watson, Stephen M; Jakeman, Eric; Ridley, Kevin D

    2006-01-01

    Amplitude weighting can improve the accuracy of frequency measurements in signals corrupted by multiplicative speckle noise. When the speckle field constitutes a circular complex Gaussian process, the optimal function of amplitude weighting is provided by the field intensity, corresponding to the intensity-weighted phase derivative statistic. In this paper, we investigate the phase derivative and intensity-weighted phase derivative returned from a two-dimensional random walk, which constitutes a generic scattering model capable of producing both Gaussian and non-Gaussian fluctuations. Analytical results are developed for the correlation properties of the intensity-weighted phase derivative, as well as limiting probability densities of the scattered field. Numerical simulation is used to generate further probability densities and determine optimal weighting criteria from non-Gaussian fields. The results are relevant to frequency retrieval in radiation scattered from random media

  1. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  2. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.

  3. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
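
    A hedged illustration of the selection idea, using scikit-learn's EM-fitted GaussianMixture and its built-in AIC rather than the report's modified AIC or Middleton's Class A model: fit a single Gaussian and a two-component mixture to the same data and compare the AIC values (lower is preferred).

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic data: a two-component mixture (background plus impulsive noise).
      x = np.concatenate([rng.normal(0, 1, 900), rng.normal(0, 5, 100)])[:, None]

      single = GaussianMixture(n_components=1, random_state=0).fit(x)    # plain Gaussian
      mixture = GaussianMixture(n_components=2, random_state=0).fit(x)   # mixture via EM

      print("AIC Gaussian :", single.aic(x))
      print("AIC mixture  :", mixture.aic(x))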

  4. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  5. Spiral phase plates for the generation of high-order Laguerre-Gaussian beams with non-zero radial index

    Science.gov (United States)

    Ruffato, G.; Carli, M.; Massari, M.; Romanato, F.

    2015-03-01

    The work of design, fabrication and characterization of spiral phase plates for the generation of Laguerre-Gaussian (LG) beams with non-null radial index is presented. Samples were fabricated by electron beam lithography on polymethylmethacrylate layers over glass substrates. The optical response of these phase optical elements was measured and the purity of the experimental beams was investigated in terms of Laguerre-Gaussian mode contributions. The far-field intensity pattern was compared with theoretical models and numerical simulations, while the expected phase features were confirmed by interferometric analyses. The high quality of the output beams confirms the applicability of these phase plates for the generation of high-order Laguerre-Gaussian beams. A novel application consisting in the design of computer-generated holograms encoding information for light beams carrying phase singularities is shown. A numerical code based on an iterative Fourier transform algorithm has been developed for the computation of the phase pattern of a phase-only diffractive optical element for illumination under LG beams. Numerical analysis and preliminary experimental results confirm the applicability of these devices as high-security optical elements.
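
    The iterative Fourier transform algorithm mentioned above is, in spirit, a Gerchberg-Saxton-type loop. A minimal sketch under an assumed grid and target pattern (not the authors' code, and with plane-wave rather than LG illumination) is:

      import numpy as np

      def ifta_phase(target_amp, n_iter=200, seed=0):
          """Iterative Fourier-transform algorithm: find a phase-only mask whose
          far field (FFT) approximates |target_amp|. Returns the mask phase."""
          rng = np.random.default_rng(seed)
          phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
          for _ in range(n_iter):
              far = np.fft.fft2(np.exp(1j * phase))            # propagate to far field
              far = target_amp * np.exp(1j * np.angle(far))    # impose target amplitude
              near = np.fft.ifft2(far)                         # propagate back
              phase = np.angle(near)                           # keep phase only (unit amplitude)
          return phase

      # Example: a small off-axis spot as the target far-field intensity.
      target = np.zeros((128, 128))
      target[40, 50] = 1.0
      mask = ifta_phase(np.sqrt(target))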

  6. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    Science.gov (United States)

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
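
    As an illustration of a fitting-free, FT-magnitude-based alignment measure, the sketch below computes an anisotropy index in [0, 1] from the second moments of the FFT magnitude; the paper's exact definition of R may differ in detail.

      import numpy as np

      def alignment_anisotropy(img):
          """Fitting-free anisotropy of the FFT magnitude of an image: eigenvalue
          contrast of the second-moment matrix of |FFT|, mapped to [0, 1].
          (Illustrative definition; not necessarily the paper's R.)"""
          F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
          ny, nx = F.shape
          y, x = np.mgrid[0:ny, 0:nx]
          x = x - nx / 2.0
          y = y - ny / 2.0
          w = F / F.sum()
          mxx = (w * x * x).sum()
          myy = (w * y * y).sum()
          mxy = (w * x * y).sum()
          eig = np.linalg.eigvalsh([[mxx, mxy], [mxy, myy]])   # ascending eigenvalues
          return (eig[1] - eig[0]) / (eig[1] + eig[0])

      # Aligned stripes give a value near 1; white noise gives a value near 0.
      stripes = np.sin(2 * np.pi * np.arange(256) / 8.0)[None, :] * np.ones((256, 1))
      print(alignment_anisotropy(stripes), alignment_anisotropy(np.random.rand(256, 256)))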

  7. Validation and comparison of dispersion models of RTARC DSS

    International Nuclear Information System (INIS)

    Duran, J.; Pospisil, M.

    2004-01-01

    RTARC DSS (Real Time Accident Release Consequences - Decision Support System) is a computer code developed at the VUJE Trnava, Inc. (Stubna, M. et al, 1993). The code calculations include atmospheric transport and diffusion, dose assessment, evaluation and displaying of the affected zones, evaluation of the early health effects, concentration and dose rate time dependence in the selected sites etc. The simulation of the protective measures (sheltering, iodine administration) is involved. The aim of this paper is to present the process of validation of the RTARC dispersion models. RTARC includes models for calculations of release for very short (Method Monte Carlo - MEMOC), short (Gaussian Straight-Line Model) and long distances (Puff Trajectory Model - PTM). Validation of the code RTARC was performed using the results of the comparisons and experiments summarized in Table 1, 'Experiments and comparisons in the process of validation of the system RTARC' (experiment or comparison - distance - model): wind tunnel experiments (Universitaet der Bundeswehr, Muenchen) - area of the NPP - Method Monte Carlo; INEL (Idaho National Engineering Laboratory) multi-tracer atmospheric experiment - short/medium distances - Gaussian model and PTM; Model Validation Kit - short distances - Gaussian model; STEP II.b 'Realistic Case Studies' - long distances - PTM; ENSEMBLE comparison - long distances - PTM (orig.)

  8. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data stream helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on randomness characteristics of encrypted data stream. We use a l1-norm regularized logistic regression to improve sparse representation of randomness features and Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.

  9. Analysis of Distributed Consensus Time Synchronization with Gaussian Delay over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xiong Gang

    2009-01-01

    This paper presents theoretical results on the convergence of the distributed consensus timing synchronization (DCTS) algorithm for wireless sensor networks assuming general Gaussian delay between nodes. The asymptotic expectation and mean square of the global synchronization error are computed. The results lead to the definition of a time delay balanced network in which average timing consensus between nodes can be achieved despite random delays. Several structured network architectures are studied as examples, and their associated simulation results are used to validate analytical findings.

  10. Linking network usage patterns to traffic Gaussianity fit

    NARCIS (Netherlands)

    de Oliveira Schmidt, R.; Sadre, R.; Melnikov, Nikolay; Schönwälder, Jürgen; Pras, Aiko

    Gaussian traffic models are widely used in the domain of network traffic modeling. The central assumption is that traffic aggregates are Gaussian distributed. Due to its importance, the Gaussian character of network traffic has been extensively assessed by researchers in the past years. In 2001,

  11. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of processes induced by the variability of the vehicle speed does not form a major difficulty because the designer can have good control over the vehicle speed by characterising the histogram of instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian and are generated mainly by the non-linear behaviour of the undercarriage and the strong occurrence of shocks generated by roughness of the terrain. This non-Gaussian nature is expressed particularly by very high flattening (kurtosis) levels that can affect the design of structures under extreme stresses conventionally acquired by spectral approaches, inherent to Gaussian processes and based essentially on spectral moments of stress processes. Due to these technical considerations, techniques for characterisation of random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches as described in the body of the text rather than spectral domain approaches.

  12. Stochastic differential calculus for Gaussian and non-Gaussian noises: A critical review

    Science.gov (United States)

    Falsone, G.

    2018-03-01

    In this paper a review of the literature works devoted to the study of stochastic differential equations (SDEs) subjected to Gaussian and non-Gaussian white noises and to fractional Brownian noises is given. In these cases, particular attention must be paid in treating the SDEs because the classical rules of differential calculus, such as the Newton-Leibniz one, cannot be applied or are applicable only with many difficulties. Here all the principal approaches for solving the SDEs are reported for any kind of noise, highlighting the negative and positive properties of each one and making comparisons where possible.

  13. Monte Carlo simulation and Gaussian broadening techniques for the full energy peak of characteristic X-rays in EDXRF

    International Nuclear Information System (INIS)

    Li Zhe; Liu Min; Shi Rui; Wu Xuemei; Tuo Xianguo

    2012-01-01

    Background: Non-standard analysis (NSA) technique is one of the most important development directions of energy dispersive X-ray fluorescence (EDXRF). Purpose: This NSA technique is mainly based on Monte Carlo (MC) simulation and full energy peak broadening, which were studied preliminarily in this paper. Methods: A kind of MC model was established for a Si-PIN based EDXRF setup, and the flux spectra were obtained for an iron ore sample. Finally, the flux spectra were broadened by Gaussian broadening parameters calculated by a new method proposed in this paper, and the broadened spectra were compared with measured energy spectra. Results: The MC method can be used to simulate EDXRF measurement, and can correct the matrix effects among elements automatically. Peak intensities can be obtained accurately by using the proposed Gaussian broadening technique. Conclusions: This study provided a key technique for EDXRF to achieve advanced NSA technology. (authors)
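
    The Gaussian broadening step can be sketched as convolving simulated 'stick' lines with an energy-dependent Gaussian detector response. The FWHM parametrization below (electronic noise plus Fano statistics for silicon) and the Fe K-line example are common textbook choices used here as assumptions, not the paper's fitted parameters.

      import numpy as np

      def gaussian_broaden(lines_keV, intensities, e_axis_keV, noise_eV=120.0, fano=0.115):
          """Broaden MC 'stick' lines with an energy-dependent Gaussian response.
          FWHM^2 ≈ noise^2 + 2.355^2 * fano * 3.85 eV * E (typical Si detector model)."""
          spectrum = np.zeros_like(e_axis_keV)
          for e0, i0 in zip(lines_keV, intensities):
              fwhm_eV = np.sqrt(noise_eV**2 + 2.355**2 * fano * 3.85 * e0 * 1e3)
              sigma = fwhm_eV / (2.355 * 1e3)                 # convert back to keV
              g = np.exp(-0.5 * ((e_axis_keV - e0) / sigma) ** 2)
              spectrum += i0 * g / (sigma * np.sqrt(2 * np.pi))
          return spectrum

      e_axis = np.linspace(0, 20, 2048)                        # keV
      spec = gaussian_broaden([6.40, 7.06], [1.0, 0.17], e_axis)   # Fe K-alpha, K-beta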

  14. CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

    Science.gov (United States)

    Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.

    2015-05-01

    We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find a good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^0.64 = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless our models provide only moderately good fits as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.

  15. Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model

    Directory of Open Access Journals (Sweden)

    Joakim da Silva

    2015-12-01

    The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
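
    A minimal sketch of a double-Gaussian lateral beam model (illustrative parameters, not the clinical beam data or the GPU implementation): the lateral dose is a weighted sum of a narrow primary Gaussian and a wide halo Gaussian, each normalized over the plane.

      import numpy as np

      def lateral_dose(r_mm, sigma1_mm, sigma2_mm, w_halo):
          """Double-Gaussian lateral profile: (1 - w)*G(sigma1) + w*G(sigma2),
          each 2D Gaussian normalized so its radial integral equals 1."""
          g1 = np.exp(-0.5 * (r_mm / sigma1_mm) ** 2) / (2 * np.pi * sigma1_mm**2)
          g2 = np.exp(-0.5 * (r_mm / sigma2_mm) ** 2) / (2 * np.pi * sigma2_mm**2)
          return (1.0 - w_halo) * g1 + w_halo * g2

      r = np.linspace(0, 50, 501)                               # mm
      d = lateral_dose(r, sigma1_mm=4.0, sigma2_mm=15.0, w_halo=0.1)
      # Check normalization of the radial integral: sum of d(r)*2*pi*r*dr is close to 1.
      dr = r[1] - r[0]
      print((d * 2 * np.pi * r).sum() * dr)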

  16. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7 .

  17. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  18. Loop corrections to primordial non-Gaussianity

    Science.gov (United States)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  19. Phase space structure of generalized Gaussian cat states

    International Nuclear Information System (INIS)

    Nicacio, Fernando; Maia, Raphael N.P.; Toscano, Fabricio; Vallejos, Raul O.

    2010-01-01

    We analyze generalized Gaussian cat states obtained by superposing arbitrary Gaussian states. The structure of the interference term of the Wigner function is always hyperbolic, surviving the action of a thermal reservoir. We also consider certain superpositions of mixed Gaussian states. An application to semiclassical dynamics is discussed.

  20. Prediction and retrodiction with continuously monitored Gaussian states

    DEFF Research Database (Denmark)

    Zhang, Jinglei; Mølmer, Klaus

    2017-01-01

    Gaussian states of quantum oscillators are fully characterized by the mean values and the covariance matrix of their quadrature observables. We consider the dynamics of a system of oscillators subject to interactions, damping, and continuous probing which maintain their Gaussian state property. ... The restriction to Gaussian states implies that the matrix E(t) is also fully characterized by a vector of mean values and a covariance matrix. We derive the dynamical equations for these quantities and we illustrate their use in the retrodiction of measurements on Gaussian systems.

  1. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian Elimination methods (precisely the Original method and the new Meet in the Middle –MiM– algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for a low number of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian Elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
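
    For reference, a plain serial Gaussian elimination with partial pivoting (not the parallel MiM variants analysed in the paper) looks like this:

      import numpy as np

      def gaussian_elimination(A, b):
          """Solve Ax = b by Gaussian elimination with partial pivoting
          followed by back substitution (serial reference version)."""
          A = A.astype(float).copy()
          b = b.astype(float).copy()
          n = len(b)
          for k in range(n - 1):
              p = k + np.argmax(np.abs(A[k:, k]))       # pivot row
              A[[k, p]] = A[[p, k]]
              b[[k, p]] = b[[p, k]]
              for i in range(k + 1, n):
                  m = A[i, k] / A[k, k]
                  A[i, k:] -= m * A[k, k:]
                  b[i] -= m * b[k]
          x = np.zeros(n)
          for i in range(n - 1, -1, -1):
              x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
          return x

      A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
      b = np.array([8.0, -11.0, -3.0])
      print(gaussian_elimination(A, b))                  # expect [2, 3, -1]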

  2. Magneto-Optic Fiber Gratings Useful for Dynamic Dispersion Management and Tunable Comb Filtering

    International Nuclear Information System (INIS)

    Bao-Jian, Wu; Xin, Lu; Kun, Qiu

    2010-01-01

    Intelligent control of dispersion management and tunable comb filtering in optical network applications can be performed by using magneto-optic fiber Bragg gratings (MFBGs). When a nonuniform magnetic field is applied to the MFBG with a constant grating period, the resulting grating response is equivalent to that of a conventional chirped grating. Under a linearly nonuniform magnetic field along the grating, a linear dispersion is achieved in the grating bandgap and the maximal dispersion slope can reach 1260 ps/nm² for a 10-mm-long fiber grating in the 1550 nm window. Similarly, a Gaussian-apodized sampled MFBG is also useful for magnetically tunable comb filtering, with potential application to clock recovery from return-to-zero optical signals and optical carrier tracking. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  3. An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.

    Science.gov (United States)

    Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu

    2017-08-05

    The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no signal for positioning in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasonic, sensors, Bluetooth, WiFi, magnetic field, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost and energy efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can filter out some abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
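
    A hedged sketch of the two generic steps named above, with made-up reference tags and thresholds (not the paper's BKNN): reject outlying RSS readings with a simple Gaussian filter, then estimate the position by weighted K-nearest reference tags.

      import numpy as np

      def gaussian_filter_rss(samples, k=1.0):
          """Keep RSS samples within k standard deviations of the mean and
          return their average (simple Gaussian outlier rejection)."""
          samples = np.asarray(samples, dtype=float)
          mu, sigma = samples.mean(), samples.std()
          kept = samples[np.abs(samples - mu) <= k * sigma] if sigma > 0 else samples
          return kept.mean()

      def knn_position(rss_tag, rss_refs, ref_xy, k=3):
          """Weighted K-nearest-neighbour location estimate from reference tags."""
          d = np.linalg.norm(rss_refs - rss_tag, axis=1)      # distance in RSS space
          idx = np.argsort(d)[:k]
          w = 1.0 / (d[idx] + 1e-9)
          return (w[:, None] * ref_xy[idx]).sum(axis=0) / w.sum()

      # Toy example: 4 reference tags read by 2 readers.
      ref_xy   = np.array([[0, 0], [0, 4], [4, 0], [4, 4]], dtype=float)
      rss_refs = np.array([[-40, -70], [-55, -52], [-60, -58], [-72, -45]], dtype=float)
      rss_tag  = np.array([gaussian_filter_rss([-54, -56, -55, -80]),   # one outlier rejected
                           gaussian_filter_rss([-53, -51, -52, -52])])
      print(knn_position(rss_tag, rss_refs, ref_xy))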

  4. Model Equation for Acoustic Nonlinear Measurement of Dispersive Specimens at High Frequency

    Science.gov (United States)

    Zhang, Dong; Kushibiki, Junichi; Zou, Wei

    2006-10-01

    We present a theoretical model for acoustic nonlinearity measurement of dispersive specimens at high frequency. The nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation governs the nonlinear propagation in the SiO2/specimen/SiO2 multi-layer medium. The dispersion effect is considered in a special manner by introducing the frequency-dependent sound velocity in the KZK equation. Simple analytic solutions are derived by applying the superposition technique of Gaussian beams. The solutions are used to correct the diffraction and dispersion effects in the measurement of the acoustic nonlinearity of cottonseed oil in the frequency range of 33-96 MHz. For two different ultrasonic devices, the accuracies of the measurements are improved to ±2.0% and ±1.3% in comparison with ±9.8% and ±2.9% obtained from the previous plane wave model.

  5. Improving the modelling of redshift-space distortions - I. A bivariate Gaussian description for the galaxy pairwise velocity distributions

    Science.gov (United States)

    Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi

    2015-01-01

    As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation r, such a function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and dispersion σ. Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and non-linear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of RSD is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. We also show how this description naturally allows for the Taylor expansion of 1 + ξ_S(s) around 1 + ξ_R(r), which leads to the Kaiser linear formula when truncated to second order, explicating its connection with the moments of the velocity distribution functions. More work is needed, but these results indicate a very promising path to make definitive progress in our programme to improve RSD estimators.
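
    The key construction can be illustrated numerically: the pairwise velocity PDF at a given separation is a superposition of Gaussians N(v; μ, σ) whose parameters (μ, σ) are themselves drawn from a bivariate Gaussian. The hyperparameters and the simple grid quadrature below are illustrative assumptions, not values fitted to simulations.

      import numpy as np
      from scipy.stats import norm, multivariate_normal

      def pairwise_velocity_pdf(v, mean_mu, mean_sigma, cov, n_grid=80):
          """P(v) = double integral over (mu, sigma) of N(v; mu, sigma^2) * BVN(mu, sigma),
          evaluated on a simple grid (illustrative hyperparameters)."""
          bvn = multivariate_normal([mean_mu, mean_sigma], cov)
          s_mu, s_sig = np.sqrt(cov[0][0]), np.sqrt(cov[1][1])
          mus = np.linspace(mean_mu - 4 * s_mu, mean_mu + 4 * s_mu, n_grid)
          sigmas = np.linspace(max(1e-2, mean_sigma - 4 * s_sig), mean_sigma + 4 * s_sig, n_grid)
          dmu, dsig = mus[1] - mus[0], sigmas[1] - sigmas[0]
          p = 0.0
          for m in mus:
              for s in sigmas:
                  p += norm.pdf(v, loc=m, scale=s) * bvn.pdf([m, s]) * dmu * dsig
          return p

      v = np.linspace(-15, 15, 121)
      pdf = pairwise_velocity_pdf(v, mean_mu=-1.0, mean_sigma=3.0, cov=[[1.0, 0.3], [0.3, 0.5]])
      print(np.sum(pdf) * (v[1] - v[0]))     # close to 1 when sigma stays positive on the grid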

  6. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

    Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques will fail to work because they will also estimate the noise spatial correlation. In this paper, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and the matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulation, we used the database freely available on the web.

  7. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

    Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques will fail to work because they will also estimate the noise spatial correlation. In this paper, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and the matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulation, we used the database freely available on the web.

  8. Frozen Gaussian approximation for 3D seismic tomography

    Science.gov (United States)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA, and with this reformulation, one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on local fast Fourier transform which greatly improves the speed of reconstruction as the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly apply wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  9. Super-resolving random-Gaussian apodized photon sieve.

    Science.gov (United States)

    Sabatyan, Arash; Roshaninejad, Parisa

    2012-09-10

    A novel apodized photon sieve is presented in which random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The random distribution in dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes in order to form a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities on the focusing properties of the photon sieve is examined as well. Analysis shows that secondary maxima have evidently been suppressed, transmission has increased enormously, and the central maxima width is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.

  10. The description of compton lines in energy-dispersive x-ray Fluorescence

    International Nuclear Information System (INIS)

    Van Gysel, Mon; Van Espen, P.J.M.

    2001-01-01

    Energy-Dispersive X-Ray Fluorescence (ED-XRF) is a non-destructive technique for elemental analysis in a concentration range from ppm to %, making use of X rays up to 100 keV. Generally, two photon-matter interactions occur, namely absorption and scattering. The absorption of incident photons gives rise to characteristic lines. Scattering gives an incoherent and a coherent line. A Gaussian peak model is adequate to describe the characteristic and coherently scattered lines. Incoherent lines appear as non-Gaussian, broadened peaks. The profile of a Compton peak is complex. It depends on the geometry and the composition of the sample. Especially when analyzing a low-Z matrix, dominant scattering and multiple scattering may cause large interferences. The absence of an appropriate fitting model means the Compton profile is seen as a limiting factor in the evaluation of spectra. An accurate description of incoherent lines should improve quantitative analysis. Therefore, a suitable fitting model, making use of the expertise of non-linear least squares procedures and Monte Carlo calculations, was systematically investigated. The proposed model, containing a modified Gaussian, is tested on experimental data recorded with a HPGe detector

  11. Wavefront-ray grid FDTD algorithm

    OpenAIRE

    ÇİYDEM, MEHMET

    2016-01-01

    A finite difference time domain algorithm on a wavefront-ray grid (WRG-FDTD) is proposed in this study to reduce numerical dispersion of conventional FDTD methods. A FDTD algorithm conforming to a wavefront-ray grid can be useful to take into account anisotropy effects of numerical grids since it features directional energy flow along the rays. An explicit and second-order accurate WRG-FDTD algorithm is provided in generalized curvilinear coordinates for an inhomogeneous isotropic medium. Num...

  12. Non-Gaussian Systems Control Performance Assessment Based on Rational Entropy

    Directory of Open Access Journals (Sweden)

    Jinglin Zhou

    2018-05-01

    Control loop Performance Assessment (CPA) plays an important role in system operations. A stochastic statistical CPA index, such as a minimum variance controller (MVC)-based CPA index, is one of the most widely used CPA indices. In this paper, a new minimum entropy controller (MEC)-based CPA method for linear non-Gaussian systems is proposed. In this method, the probability density function (PDF) and rational entropy (RE) are respectively used to describe the characteristics and the uncertainty of random variables. To better estimate the performance benchmark, an improved EDA algorithm, which is used to estimate the system parameters and noise PDF, is given. The effectiveness of the proposed method is illustrated through case studies on an ARMAX system.

  13. Some continual integrals from Gaussian forms

    International Nuclear Information System (INIS)

    Mazmanishvili, A.S.

    1985-01-01

    A summary of results on the continual integration of Gaussian functionals is given. The summary contains 124 continual integrals, each the mathematical expectation of the corresponding Gaussian form over a continuum of random trajectories of four types: the real-valued Ornstein-Uhlenbeck process, the Wiener process, the complex-valued Ornstein-Uhlenbeck process and the stochastic harmonic one. The summary includes both known continual integrals and previously unpublished ones. The mathematical results of the continual integration carried out in this work may be applied to problems in the theory of stochastic processes that reduce to finding means of Gaussian forms with respect to measures generated by the above stochastic processes

  14. Current inversion induced by colored non-Gaussian noise

    International Nuclear Information System (INIS)

    Bag, Bidhan Chandra; Hu, Chin-Kung

    2009-01-01

    We study a stochastic process driven by colored non-Gaussian noises. For the flashing ratchet model we find that there is a current inversion in the variation of the current with the half-cycle period which accounts for the potential on–off operation. The current inversion almost disappears if one switches from non-Gaussian (NG) to Gaussian (G) noise. We also find that at low values of the asymmetry parameter of the potential the mobility-controlled current is more negative for NG noise as compared to G noise. But at large magnitudes of the parameter the diffusion-controlled positive current is higher for the former than for the latter. On increasing the noise correlation time (τ), keeping the noise strength fixed, the mean velocity of a particle first increases and then decreases after passing through a maximum if the noise is non-Gaussian. For Gaussian noise, the current monotonically decreases. The current increases with the noise parameter p, 0 < p < 5/3, which equals 1 for Gaussian noise

  15. Passivity and practical work extraction using Gaussian operations

    International Nuclear Information System (INIS)

    Brown, Eric G; Huber, Marcus; Friis, Nicolai

    2016-01-01

    Quantum states that can yield work in a cyclical Hamiltonian process form one of the primary resources in the context of quantum thermodynamics. Conversely, states whose average energy cannot be lowered by unitary transformations are called passive. However, while work may be extracted from non-passive states using arbitrary unitaries, the latter may be hard to realize in practice. It is therefore pertinent to consider the passivity of states under restricted classes of operations that can be feasibly implemented. Here, we ask how restrictive the class of Gaussian unitaries is for the task of work extraction. We investigate the notion of Gaussian passivity, that is, we present necessary and sufficient criteria identifying all states whose energy cannot be lowered by Gaussian unitaries. For all other states we give a prescription for the Gaussian operations that extract the maximal amount of energy. Finally, we show that the gap between passivity and Gaussian passivity is maximal, i.e., Gaussian-passive states may still have a maximal amount of energy that is extractable by arbitrary unitaries, even under entropy constraints. (paper)

  16. The use of the multi-cumulant tensor analysis for the algorithmic optimisation of investment portfolios

    Science.gov (United States)

    Domino, Krzysztof

    2017-02-01

    The cumulant analysis plays an important role in the analysis of non-Gaussian distributed data. Share price returns are a good example of such data. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. Such an algorithm is based on the Alternating Least Square method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (percentage share returns of many companies). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect an incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within the examination window determined by low values of the Hurst exponent. Note that the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is the novel idea. It can be expected that the algorithm would be useful in financial data analysis on a worldwide scale as well as in the analysis of other types of non-Gaussian distributed data.
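
    The Hurst exponent via detrended fluctuation analysis (DFA), used above as the crash indicator, can be sketched as follows; the window sizes and the synthetic return series are assumptions for illustration.

      import numpy as np

      def hurst_dfa(x, scales=(8, 16, 32, 64, 128)):
          """Hurst exponent via DFA: slope of log F(s) versus log s, where F(s)
          is the RMS of linearly detrended fluctuations of the integrated series
          in non-overlapping windows of length s."""
          y = np.cumsum(x - np.mean(x))                       # integrated profile
          fluct = []
          for s in scales:
              n_win = len(y) // s
              rms = []
              for i in range(n_win):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  coef = np.polyfit(t, seg, 1)                # local linear trend
                  rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
              fluct.append(np.mean(rms))
          slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
          return slope

      returns = np.random.default_rng(1).standard_normal(4096)   # uncorrelated returns
      print(hurst_dfa(returns))                                   # close to 0.5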

  17. Tachyon mediated non-Gaussianity

    International Nuclear Information System (INIS)

    Dutta, Bhaskar; Leblond, Louis; Kumar, Jason

    2008-01-01

    We describe a general scenario where primordial non-Gaussian curvature perturbations are generated in models with extra scalar fields. The extra scalars communicate to the inflaton sector mainly through the tachyonic (waterfall) field condensing at the end of hybrid inflation. These models can yield significant non-Gaussianity of the local shape, and both signs of the bispectrum can be obtained. These models have cosmic strings and a nearly flat power spectrum, which together have been recently shown to be a good fit to WMAP data. We illustrate with a model of inflation inspired from intersecting brane models.

  18. Large deviations for Gaussian processes in Hoelder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields

  19. Particle rejuvenation of Rao-Blackwellized sequential Monte Carlo smoothers for conditionally linear and Gaussian models

    Science.gov (United States)

    Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric

    2017-12-01

    This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.

  20. ORACLS: A system for linear-quadratic-Gaussian control law design

    Science.gov (United States)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
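
    The regulator subroutines described above predate modern scientific-computing stacks, but the core computation, a steady-state Riccati solution and the corresponding feedback gain, can be sketched today with SciPy. This is not ORACLS itself, and the system matrices below are made-up illustrative values.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete-time optimal linear regulator gain, u[k] = -K x[k]."""
    P = solve_discrete_are(A, B, Q, R)               # steady-state Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Double-integrator example with unit weights (illustrative only)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)
print("feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```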

  1. Increasing Entanglement between Gaussian States by Coherent Photon Subtraction

    DEFF Research Database (Denmark)

    Ourjoumtsev, Alexei; Dantan, Aurelien Romain; Tualle Brouri, Rosa

    2007-01-01

    We experimentally demonstrate that the entanglement between Gaussian entangled states can be increased by non-Gaussian operations. Coherent subtraction of single photons from Gaussian quadrature-entangled light pulses, created by a nondegenerate parametric amplifier, produces delocalized states...

  2. Dispersion modeling in assessing air quality of industrial projects under Indian regulatory regime

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Amitava [Department of Chemical Engineering, University of Calcutta, 92, A.P.C.Road, Kolkata 700 009 (India)

    2010-07-01

    Environmental impact assessment (EIA) studies conducted over the years as part of obtaining environmental clearance in accordance with Indian regulation have given significant attention to Gaussian dispersion modeling for predicting the ground level concentration (GLC) of pollutants, especially SO{sub 2}. Making an ad hoc decision to recommend a flue gas desulfurization (FGD) system for Indian fossil fuel combustion operations is not realistic, considering the use of fuel with low sulfur content. Predictive modeling is therefore imperative before any conclusive decision is made, and dispersion modeling has accordingly been incorporated in Indian environmental regulations. This article provides approaches to ascertain the pollution potential of a proposed power plant operating either alone or in the presence of other industrial operations under different conditions. To assess the performance of the computational work, four different cases were analyzed based on the worst-case scenario. The predicted results were compared with the National Ambient Air Quality Standards (NAAQS) of India. One specific case was found to exceed the ambient air quality standard for SO2 and it was therefore suggested to install an FGD system with at least 80% SO2 removal efficiency. With this recommendation, the cumulative prediction yielded a very conservative resultant value of the 24-hourly maximum GLC of SO2, as against a value well above the stipulated limit without the FGD system. The computational algorithm developed can therefore be gainfully utilized for EIA analysis under Indian conditions.
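
    For readers unfamiliar with the underlying model, the ground-level concentration of a continuously emitting point source in the standard Gaussian plume formulation (with total ground reflection) can be sketched as below. The emission rate, wind speed, stack height and dispersion coefficients are illustrative placeholders, not values from the study; in practice the sigmas are taken from stability-class curves at the receptor's downwind distance.

```python
import numpy as np

def ground_level_concentration(Q, u, H, y, sigma_y, sigma_z):
    """Ground-level (z = 0) concentration of a Gaussian plume from a point
    source of strength Q (g/s), wind speed u (m/s), effective stack height
    H (m), crosswind offset y (m).  sigma_y and sigma_z (m) are the
    dispersion coefficients evaluated at the receptor's downwind distance."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative numbers only: 100 g/s source, 4 m/s wind, 60 m stack,
# sigma_y = 80 m and sigma_z = 40 m at the receptor distance
print(ground_level_concentration(Q=100.0, u=4.0, H=60.0, y=0.0,
                                 sigma_y=80.0, sigma_z=40.0), "g/m^3")
```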

  3. Genetic Algorithmic Optimization of PHB Production by a Mixed Culture in an Optimally Dispersed Fed-batch Bioreactor

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2009-10-01

    Full Text Available Poly-β-hydroxybutyrate (PHB) is an energy-storage polymer whose properties are similar to those of chemical polymers such as polyethylene and polypropylene. Moreover, PHB is biodegradable, absorbed by human tissues and less energy-consuming than synthetic polymers. Although Ralstonia eutropha is widely used to synthesize PHB, it is inefficient in utilizing glucose and similar sugars. Therefore a co-culture of R. eutropha and Lactobacillus delbrueckii is preferred since the latter can convert glucose to lactate, which R. eutropha can metabolize easily. Tohyama et al. [24] maximized PHB production in a well-mixed fed-batch bioreactor with glucose and (NH4)2SO4 as the primary substrates. Since production-scale bioreactors often deviate from ideal laboratory-scale reactors, a large bioreactor was simulated by means of a dispersion model with the kinetics determined by Tohyama et al. [24] and dispersion set at an optimum Peclet number of 20 [32]. The time-dependent feed rates of the two substrates were determined through a genetic algorithm (GA) to maximize PHB production. This bioreactor produced 22.2% more PHB per liter and 12.8% more cell mass than achieved by Tohyama et al. [24]. These results, and similar observations with other fermentations, indicate the feasibility of enhancing the efficiency of large nonideal bioreactors through GA optimizations.

  4. New gaussian points for the solution of first order ordinary ...

    African Journals Online (AJOL)

    Numerical experiments carried out using the new Gaussian points revealed their efficiency on stiff differential equations. The results also reveal that methods using the new Gaussian points are more accurate than those using the standard Gaussian points on non-stiff initial value problems. Keywords: Gaussian points ...

  5. Quantitative analysis with energy dispersive X-ray fluorescence analyser

    International Nuclear Information System (INIS)

    Kataria, S.K.; Kapoor, S.S.; Lal, M.; Rao, B.V.N.

    1977-01-01

    Quantitative analysis of samples using radioisotope excited energy dispersive x-ray fluorescence system is described. The complete set-up is built around a locally made Si(Li) detector x-ray spectrometer with an energy resolution of 220 eV at 5.94 KeV. The photopeaks observed in the x-ray fluorescence spectra are fitted with a Gaussian function and the intensities of the characteristic x-ray lines are extracted, which in turn are used for calculating the elemental concentrations. The results for a few typical cases are presented. (author)
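
    The peak-fitting step described above can be sketched with a routine least-squares fit: each photopeak is modelled as a Gaussian on a background and the fitted area is taken as the line intensity. The synthetic spectrum below is only an illustration of the procedure, not the instrument data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_peak(x, area, centre, sigma, bkg):
    """Gaussian photopeak on a flat background; the fitted area is
    proportional to the characteristic x-ray line intensity."""
    return (area / (sigma * np.sqrt(2 * np.pi))
            * np.exp(-(x - centre) ** 2 / (2 * sigma ** 2)) + bkg)

# Synthetic spectrum around a single photopeak (channel units are arbitrary)
channels = np.arange(200, 300)
true_counts = gauss_peak(channels, area=5000, centre=250, sigma=4.0, bkg=20)
counts = np.random.poisson(true_counts)

p0 = (counts.sum(), channels[np.argmax(counts)], 3.0, counts.min())
popt, pcov = curve_fit(gauss_peak, channels, counts, p0=p0)
print("fitted peak area (line intensity):", popt[0])
```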

  6. Experimental demonstration of adaptive digital monitoring and compensation of chromatic dispersion for coherent DP-QPSK receiver

    DEFF Research Database (Denmark)

    Borkowski, Robert; Zhang, Xu; Zibar, Darko

    2011-01-01

    We experimentally demonstrate a digital signal processing (DSP)-based optical performance monitoring (OPM) algorithm for in-service monitoring of chromatic dispersion (CD) in coherent transport networks. Dispersion accumulated in a 40 Gbit/s QPSK signal after 80 km of fiber transmission is successfully ... drives an adaptive digital CD equalizer. © 2011 Optical Society of America.

  7. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    Science.gov (United States)

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuations under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate the relative coordinates of nodes by a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.

  8. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    Science.gov (United States)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  9. A novel ultrawideband FDTD numerical modeling of ground penetrating radar on arbitrary dispersive soils

    NARCIS (Netherlands)

    Mescia, L.; Bia, P.; Caratelli, D.

    2017-01-01

    A novel two-dimensional (2-D) finite-difference time-domain algorithm for modeling ultrawideband pulse propagation in arbitrary dispersive soils is presented. The soil dispersion is modeled by a general power-law series representation, accounting for multiple higher-order dispersive relaxation

  10. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address the problem that different shapes can share the same descriptor, as well as the poor antinoise performance of the distance coherence vector algorithm. In this algorithm, the image contour curve is first evolved by a Gaussian function, and the distance coherence vector is then extracted from the contours of the original and evolved images. The multiscale distance coherence vector is obtained by a suitable weighting of the distance coherence vectors of the evolved image contours. The algorithm is not only invariant to translation, rotation, and scaling transformations but also has good antinoise performance. The experimental results show that the algorithm achieves higher recall and precision rates for the retrieval of images polluted by noise. PMID:24883416

  11. Topography and its effects on atmospheric dispersion in a risk study for nuclear facilities

    International Nuclear Information System (INIS)

    Wittek, P.

    1985-07-01

    In the consequence assessment model applied in the German Reactor Risk Study (GRRS), the atmospheric dispersion of radioactive substances is treated with a straight-line Gaussian dispersion model. However, some of the German nuclear power plants are located in complex terrain. In this report, the 19 sites considered in the GRRS are described and classified by two different methods with respect to terrain complexity. The relevant effects of the terrain on the dispersion are discussed. Two modifications of the GRRS consequence assessment code UFOMOD take into account, in a simple way, the terrain elevation and the enhanced turbulence possibly caused by the terrain structure. Sample calculations for two release categories of the GRRS demonstrate the effect of these modifications on the calculated number of early fatalities. (orig.) [de

  12. Gaussian entanglement distribution via satellite

    Science.gov (United States)

    Hosseinidehaj, Nedasadat; Malaney, Robert

    2015-02-01

    In this work we analyze three quantum communication schemes for the generation of Gaussian entanglement between two ground stations. Communication occurs via a satellite over two independent atmospheric fading channels dominated by turbulence-induced beam wander. In our first scheme, the engineering complexity remains largely on the ground transceivers, with the satellite acting simply as a reflector. Although the channel state information of the two atmospheric channels remains unknown in this scheme, the Gaussian entanglement generation between the ground stations can still be determined. On the ground, distillation and Gaussification procedures can be applied, leading to a refined Gaussian entanglement generation rate between the ground stations. We compare the rates produced by this first scheme with two competing schemes in which quantum complexity is added to the satellite, thereby illustrating the tradeoff between space-based engineering complexity and the rate of ground-station entanglement generation.

  13. Revisiting non-Gaussianity from non-attractor inflation models

    Science.gov (United States)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single-field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instantaneous and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition, such as in the case of a smooth transition or some sharp transition scenarios, the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  14. Gaussian limit of compact spin systems

    International Nuclear Information System (INIS)

    Bellissard, J.; Angelis, G.F. de

    1981-01-01

    It is shown that the Wilson and Wilson-Villain U(1) models reproduce, in the low coupling limit, the Gaussian lattice approximation of the Euclidean electromagnetic field. By the same methods it is also possible to prove that the plane rotator and the Villain model share a common Gaussian behaviour in the low temperature limit. (Auth.)

  15. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

    Full Text Available Aimed at solving the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. By dividing the IMM-UKF into two links, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction. Considering the external link of the IMM-UKF, this constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem and leads to an adaptive matrix that balances the contribution of the state. Considering the internal link of the IMM-UKF, the algorithm provides filtering solutions through a new adaptive UKF algorithm (NAUKF) that addresses sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experimental results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves very good tracking performance and will be effective and applicable in the field of maneuvering target tracking.

  16. Information-Dispersion-Entropy-Based Blind Recognition of Binary BCH Codes in Soft Decision Situations

    Directory of Open Access Journals (Sweden)

    Yimeng Zhang

    2013-05-01

    Full Text Available A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary BCH codes, while the coding parameters are unknown. The problem can be addressed in the context of non-cooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. The recognition processing includes two major procedures: code length estimation and generator polynomial reconstruction. A hard-decision method has been proposed in previous literature. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function, and we then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and the simulations show the efficiency of the proposed algorithm.

  17. Gaussian cloning of coherent states with known phases

    International Nuclear Information System (INIS)

    Alexanian, Moorad

    2006-01-01

    The fidelity for cloning coherent states is improved over that provided by optimal Gaussian and non-Gaussian cloners for the subset of coherent states that are prepared with known phases. Gaussian quantum cloning duplicates all coherent states with an optimal fidelity of 2/3. Non-Gaussian cloners give optimal single-clone fidelity for a symmetric 1-to-2 cloner of 0.6826. Coherent states that have known phases can be cloned with a fidelity of 4/5. The latter is realized by a combination of two beam splitters and a four-wave mixer operated in the nonlinear regime, all of which are realized by interaction Hamiltonians that are quadratic in the photon operators. Therefore, the known Gaussian devices for cloning coherent states are extended when cloning coherent states with known phases by considering a nonbalanced beam splitter at the input side of the amplifier

  18. Non-Gaussian Methods for Causal Structure Learning.

    Science.gov (United States)

    Shimizu, Shohei

    2018-05-22

    Causal structure learning is one of the most exciting new topics in the fields of machine learning and statistics. In many empirical sciences, including prevention science, the causal mechanisms underlying various phenomena need to be studied. Nevertheless, in many cases, classical methods for causal structure learning are not capable of estimating the causal structure of variables. This is because they explicitly or implicitly assume Gaussianity of the data and typically utilize only the covariance structure. In many applications, however, non-Gaussian data are obtained, which means that more information may be contained in the data distribution than is captured by the covariance matrix. Thus, many new methods have recently been proposed for using the non-Gaussian structure of data to infer the causal structure of variables. This paper introduces prevention scientists to such causal structure learning methods, particularly those based on the linear, non-Gaussian, acyclic model known as LiNGAM. These non-Gaussian data analysis tools can fully estimate the underlying causal structures of variables, under assumptions, even in the presence of unobserved common causes. This feature is in contrast to other approaches. A simulated example is also provided.

  19. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR + , implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR + algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR + implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR + in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR + . The experimental results show that the EKF-GPR + algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR + reduces the patient-wise RMS error to 37%, 39% and 42

  20. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in

  1. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression.

    Science.gov (United States)

    Bukhari, W; Hong, S-M

    2015-01-07

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR(+), implements a gating function without pre-specifying a particular region of the patient's breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR(+) algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR(+) implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR(+) in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR(+). The experimental results show that the EKF-GPR(+) algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR(+) reduces the patient-wise RMS error to 37%, 39% and
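
    The GPR correction-and-gating idea common to the three records above can be illustrated with an off-the-shelf Gaussian process regressor: predict a breathing trace a fixed latency ahead and flag samples whose predictive standard deviation is large. This sketch omits the LCM-EKF stage and the sparse approximation of EKF-GPR+, and the synthetic trace, lag structure and thresholds are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic quasi-periodic "breathing" trace sampled at 25 Hz (illustrative)
t = np.arange(0, 40, 0.04)
trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)

# Build lagged inputs: predict the sample `latency` steps (0.4 s) ahead
k, latency = 5, 10
X, y = [], []
for i in range(k, len(trace) - latency):
    X.append(trace[i - k:i])
    y.append(trace[i + latency])
X, y = np.array(X), np.array(y)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X[:700], y[:700])
mean, std = gpr.predict(X[700:], return_std=True)

# Large predictive std flags instants where gating (pausing the beam) may help
print("RMS prediction error:", np.sqrt(np.mean((mean - y[700:]) ** 2)))
print("fraction flagged by std threshold:", np.mean(std > 2 * std.mean()))
```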

  2. Gaussian polynomials and content ideal in trivial extensions

    International Nuclear Information System (INIS)

    Bakkari, C.; Mahdou, N.

    2006-12-01

    The goal of this paper is to exhibit a class of Gaussian non-coherent rings R (with zero-divisors) such that wdim(R) = ∞ and fPdim(R) is always at most one, and also to exhibit a new class of rings (with zero-divisors) which are neither locally Noetherian nor locally domains, in which Gaussian polynomials have a locally principal content. For this purpose, we study the possible transfer of the 'Gaussian' property and the property 'the content ideal of a Gaussian polynomial is locally principal' to various trivial extension contexts. This article includes a brief discussion of the scope and limits of our results. (author)

  3. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Yushan Zhang

    2015-01-01

    Full Text Available Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bounds of special Gaussian-mutation EP and Cauchy-mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP is no more than a polynomial in n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial expression involving an exponential and the given polynomial in n.
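
    To make the two mutation operators concrete, the sketch below runs a minimal evolutionary-programming-style loop with either Gaussian or heavy-tailed Cauchy mutation on a sphere function. Population size, step size and the test function are arbitrary illustrative choices, not those analysed in the paper.

```python
import numpy as np

def evolve(fitness, dim=10, pop=30, gens=200, mutation="gaussian", sigma=0.3, seed=0):
    """Minimal (mu + mu) evolutionary-programming loop with Gaussian or
    Cauchy mutation; returns the best objective value found (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(gens):
        if mutation == "gaussian":
            step = rng.normal(0.0, sigma, size=x.shape)
        else:                                    # heavy-tailed Cauchy mutation
            step = sigma * rng.standard_cauchy(size=x.shape)
        children = x + step
        both = np.vstack([x, children])
        scores = np.apply_along_axis(fitness, 1, both)
        x = both[np.argsort(scores)[:pop]]       # keep the best `pop` individuals
    return fitness(x[0])

sphere = lambda v: float(np.sum(v ** 2))
print("Gaussian mutation:", evolve(sphere, mutation="gaussian"))
print("Cauchy mutation:  ", evolve(sphere, mutation="cauchy"))
```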

  4. Quantifying entanglement in two-mode Gaussian states

    Science.gov (United States)

    Tserkis, Spyros; Ralph, Timothy C.

    2017-12-01

    Entangled two-mode Gaussian states are a key resource for quantum information technologies such as teleportation, quantum cryptography, and quantum computation, so quantification of Gaussian entanglement is an important problem. Entanglement of formation is unanimously considered a proper measure of quantum correlations, but for arbitrary two-mode Gaussian states no analytical form is currently known. In contrast, logarithmic negativity is a measure that is straightforward to calculate and so has been adopted by most researchers, even though it is a less faithful quantifier. In this work, we derive an analytical lower bound for entanglement of formation of generic two-mode Gaussian states, which becomes tight for symmetric states and for states with balanced correlations. We define simple expressions for entanglement of formation in physically relevant situations and use these to illustrate the problematic behavior of logarithmic negativity, which can lead to spurious conclusions.

  5. Optimal unitary dilation for bosonic Gaussian channels

    International Nuclear Information System (INIS)

    Caruso, Filippo; Eisert, Jens; Giovannetti, Vittorio; Holevo, Alexander S.

    2011-01-01

    A general quantum channel can be represented in terms of a unitary interaction between the information-carrying system and a noisy environment. In this paper the minimal number of quantum Gaussian environmental modes required to provide a unitary dilation of a multimode bosonic Gaussian channel is analyzed for both pure and mixed environments. We compute this quantity in the case of pure environment corresponding to the Stinespring representation and give an improved estimate in the case of mixed environment. The computations rely, on one hand, on the properties of the generalized Choi-Jamiolkowski state and, on the other hand, on an explicit construction of the minimal dilation for an arbitrary bosonic Gaussian channel. These results introduce a new quantity reflecting the "noisiness" of bosonic Gaussian channels and can be applied to address some issues concerning transmission of information in continuous variable systems.

  6. Graphical calculus for Gaussian pure states

    International Nuclear Information System (INIS)

    Menicucci, Nicolas C.; Flammia, Steven T.; Loock, Peter van

    2011-01-01

    We provide a unified graphical calculus for all Gaussian pure states, including graph transformation rules for all local and semilocal Gaussian unitary operations, as well as local quadrature measurements. We then use this graphical calculus to analyze continuous-variable (CV) cluster states, the essential resource for one-way quantum computing with CV systems. Current graphical approaches to CV cluster states are only valid in the unphysical limit of infinite squeezing, and the associated graph transformation rules only apply when the initial and final states are of this form. Our formalism applies to all Gaussian pure states and subsumes these rules in a natural way. In addition, the term 'CV graph state' currently has several inequivalent definitions in use. Using this formalism we provide a single unifying definition that encompasses all of them. We provide many examples of how the formalism may be used in the context of CV cluster states: defining the 'closest' CV cluster state to a given Gaussian pure state and quantifying the error in the approximation due to finite squeezing; analyzing the optimality of certain methods of generating CV cluster states; drawing connections between this graphical formalism and bosonic Hamiltonians with Gaussian ground states, including those useful for CV one-way quantum computing; and deriving a graphical measure of bipartite entanglement for certain classes of CV cluster states. We mention other possible applications of this formalism and conclude with a brief note on fault tolerance in CV one-way quantum computing.

  7. Mode entanglement of Gaussian fermionic states

    Science.gov (United States)

    Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.

    2018-04-01

    We investigate the entanglement of n -mode n -partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n -partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n -mode n -partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n -mode n -partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m -mode n -partite (for m >n ) case.

  8. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
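
    A three-component Gaussian mixture of the kind described above can be fitted directly with scikit-learn; the snippet below does so on synthetic RR intervals standing in for a real HRV recording, so the component values are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic RR-interval series (seconds); real HRV data would be used instead
rr = np.concatenate([np.random.normal(0.80, 0.03, 2000),
                     np.random.normal(0.95, 0.05, 1000),
                     np.random.normal(0.70, 0.02, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)
for w, m, c in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={m:.3f} s  std={np.sqrt(c):.3f} s")
```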

  9. Learning conditional Gaussian networks

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given a configuration of the discrete parents. We assume parameter independence and complete data. Further, to learn the structure of the network, the network score is deduced. We then develop a local master prior procedure for deriving parameter priors in these networks. This procedure satisfies parameter independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example.

  10. Universal dispersion model for characterization of optical thin films over wide spectral range: Application to magnesium fluoride

    Science.gov (United States)

    Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan

    2017-11-01

    Optical characterization of magnesium fluoride thin films is performed in a wide spectral range from the far infrared to the extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects, i.e. random roughness of the upper boundary and a defect transition layer at the lower boundary, are taken into account. An extension of the universal dispersion model consisting of expressing the excitonic contributions as linear combinations of Gaussian and truncated Lorentzian terms is introduced. The spectral dependencies of the optical constants are presented in graphical form and by the complete set of dispersion parameters, which allows generating tabulated optical constants with the required range and step using a simple utility in the newAD2 software package.

  11. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that point clouds can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be recast as the separation of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or objects can be computed. After several iterations, points can be labelled as the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
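
    The EM separation step can be sketched for the simplest case, a two-component one-dimensional mixture over point elevations, as below. Real airborne LiDAR filtering works on richer features and includes the intensity-based refinement mentioned above; the heights here are synthetic and the two-component assumption is an illustrative simplification.

```python
import numpy as np

def em_two_gaussians(z, iters=50):
    """Two-component 1-D Gaussian-mixture EM; returns the posterior probability
    that each point belongs to the lower (ground-like) component."""
    mu = np.array([z.min(), z.max()], dtype=float)      # crude initialisation
    var = np.array([z.var(), z.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        pdf = (w / np.sqrt(2 * np.pi * var)
               * np.exp(-(z[:, None] - mu) ** 2 / (2 * var)))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n_k = resp.sum(axis=0)
        w = n_k / len(z)
        mu = (resp * z[:, None]).sum(axis=0) / n_k
        var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / n_k
    return resp[:, 0]

# Synthetic elevations: flat ground near 100 m plus vegetation/buildings above it
z = np.concatenate([np.random.normal(100.0, 0.3, 5000),
                    np.random.normal(106.0, 2.0, 1500)])
p_ground = em_two_gaussians(z)
print("points labelled ground:", np.sum(p_ground > 0.5))
```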

  12. Two-photon optics of Bessel-Gaussian modes

    CSIR Research Space (South Africa)

    McLaren, M

    2013-09-01

    Full Text Available In this paper we consider geometrical two-photon optics of Bessel-Gaussian modes generated in spontaneous parametric down-conversion of a Gaussian pump beam. We provide a general theoretical expression for the orbital angular momentum (OAM) spectrum...

  13. Gaussian discriminating strength

    Science.gov (United States)

    Rigovacca, L.; Farace, A.; De Pasquale, A.; Giovannetti, V.

    2015-10-01

    We present a quantifier of nonclassical correlations for bipartite, multimode Gaussian states. It is derived from the Discriminating Strength measure, introduced for finite dimensional systems in Farace et al., [New J. Phys. 16, 073010 (2014), 10.1088/1367-2630/16/7/073010]. As the latter the new measure exploits the quantum Chernoff bound to gauge the susceptibility of the composite system with respect to local perturbations induced by unitary gates extracted from a suitable set of allowed transformations (the latter being identified by posing some general requirements). Closed expressions are provided for the case of two-mode Gaussian states obtained by squeezing or by linearly mixing via a beam splitter a factorized two-mode thermal state. For these density matrices, we study how nonclassical correlations are related with the entanglement present in the system and with its total photon number.

  14. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...

  15. On signal design by the R sub 0 criterion for non-white Gaussian noise channels

    Science.gov (United States)

    Bordelon, D. L.

    1976-01-01

    The use of the R sub 0 criterion for modulation system design is investigated for channels with non-white Gaussian noise. A signal space representation of the waveform channel is developed, and the cut-off rate R sub 0 for vector channels with additive non-white Gaussian noise and unquantized demodulation is derived. When the signal input to the channel is a continuous random vector, maximization of R sub 0 with constrained average signal energy leads to a water-filling interpretation of the optimal energy distribution in signal space. The necessary condition for a finite signal set to maximize R sub 0 with constrained energy and an equally likely probability assignment of signal vectors is presented, and an algorithm is outlined for numerically computing the optimum signal set. A necessary condition is also found for a constrained-energy, finite signal set which maximizes a Taylor series approximation of R sub 0. This signal set is compared with the finite signal set which has the water-filling average energy distribution.
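
    The water-filling interpretation mentioned above corresponds to a simple numerical procedure: raise a common water level until the power poured above each sub-channel's noise floor exhausts the energy budget. A minimal sketch (bisection on the water level, with made-up noise levels) follows.

```python
import numpy as np

def water_filling(noise, total_power, tol=1e-9):
    """Distribute total_power over parallel Gaussian sub-channels with the given
    noise levels so that power_i = max(mu - noise_i, 0) and the powers sum to
    total_power (the classic water-filling solution)."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_power
    while hi - lo > tol:                       # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

# Illustrative noise levels and power budget
powers = water_filling(noise=[1.0, 2.0, 4.0, 8.0], total_power=6.0)
print(powers, powers.sum())
```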

  16. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, so image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of the prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  17. An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor

    Directory of Open Access Journals (Sweden)

    He Xu

    2017-08-01

    Full Text Available The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no signal for positioning in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasonic, sensors, Bluetooth, WiFi, magnetic field, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can filter out some abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
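
    A LANDMARC-style baseline with a Gaussian outlier filter can be sketched as below: abnormal RSS readings are discarded before averaging, and the position is a weighted k-nearest-neighbour combination of reference-tag coordinates. This is not the BKNN algorithm itself (the Bayesian-probability stage is omitted), and all readings and coordinates are invented for illustration.

```python
import numpy as np

def gaussian_filter_rssi(samples, n_sigma=1.0):
    """Discard RSSI readings outside mean +/- n_sigma*std (simple Gaussian
    filtering of outliers caused by multipath), then return the average."""
    s = np.asarray(samples, dtype=float)
    mu, sd = s.mean(), s.std()
    kept = s[np.abs(s - mu) <= n_sigma * sd] if sd > 0 else s
    return kept.mean()

def knn_position(target_rssi, reference_rssi, reference_xy, k=3):
    """Weighted k-nearest-neighbour estimate: the target position is the
    inverse-distance-weighted mean of the k reference tags whose RSSI
    vectors are closest to the target's."""
    d = np.linalg.norm(reference_rssi - target_rssi, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * reference_xy[idx]).sum(axis=0) / w.sum()

# Toy setup: 4 readers, 5 reference tags at known positions (metres)
ref_rssi = np.array([[-50, -60, -70, -65],
                     [-55, -52, -68, -72],
                     [-62, -58, -55, -66],
                     [-70, -66, -52, -58],
                     [-68, -72, -60, -50]], dtype=float)
ref_xy = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [2, 2]], dtype=float)
target = np.array([gaussian_filter_rssi([-60, -61, -59, -75]),   # reader 1
                   -57.0, -56.0, -64.0])                          # readers 2-4
print("estimated position:", knn_position(target, ref_rssi, ref_xy))
```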

  18. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.

  19. Fast Kalman-like filtering for large-dimensional linear and Gaussian state-space models

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-08-13

    This paper considers the filtering problem for linear and Gaussian state-space models with large dimensions, a setup in which the optimal Kalman Filter (KF) might not be applicable owing to the excessive cost of manipulating huge covariance matrices. Among the most popular alternatives that enable cheaper and reasonable computation is the Ensemble KF (EnKF), a Monte Carlo-based approximation. In this paper, we consider a class of a posteriori distributions with diagonal covariance matrices and propose fast approximate deterministic-based algorithms based on the Variational Bayesian (VB) approach. More specifically, we derive two iterative KF-like algorithms that differ in the way they operate between two successive filtering estimates; one involves a smoothing estimate and the other involves a prediction estimate. Despite its iterative nature, the prediction-based algorithm provides a computational cost that is, on the one hand, independent of the number of iterations in the limit of very large state dimensions, and on the other hand, always much smaller than the cost of the EnKF. The cost of the smoothing-based algorithm depends on the number of iterations that may, in some situations, make this algorithm slower than the EnKF. The performances of the proposed filters are studied and compared to those of the KF and EnKF through a numerical example.

  20. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    Science.gov (United States)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  1. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    Science.gov (United States)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
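
    For a one-dimensional marginal, the maximum-likelihood Box-Cox step described above is available directly in SciPy; the snippet below Gaussianizes a skewed synthetic sample and reports the fitted transformation parameter. The multivariate generalizations discussed in the paper are not covered by this sketch.

```python
import numpy as np
from scipy import stats

# Skewed one-dimensional "posterior sample" (e.g. thinned Markov-chain draws)
sample = np.random.gamma(shape=2.0, scale=1.5, size=20000)

# Maximum-likelihood Box-Cox transform; lmbda is chosen to maximise Gaussianity
transformed, lmbda = stats.boxcox(sample)
print("optimal Box-Cox lambda:", lmbda)

# A posteriori quality check: skewness should move towards zero
print("skewness before/after:", stats.skew(sample), stats.skew(transformed))
```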

  2. Three-dimensional charge dispersion curves from interactions of 11--29 GeV protons with uranium

    International Nuclear Information System (INIS)

    Yu, Y.

    1980-01-01

    Experimental nuclear charge dispersion curves from interactions of 11--29 Gev protons with 238 U have been used in the construction of three-dimensional charge dispersion curves. They show the yield variation with mass number A. Neutron-deficient products are distributed over the entire mass range with a peak at A near 87, while the yield of neutron-excessive products is distributed only in the relatively narrow mass region between A=70 and A=150 and has a maximum around A=115. An isobaric yield curve has been obtained by summing up each of the charge dispersion curves and shows a peak, rather than the flat top, in the mass region A=80 to 140 reported previously. The mass yield curves of neutron-excessive and neutron-deficient products are obtained by a decomposition of the charge dispersion curve with two Gaussians, and the mechanism of formation is suggested

  3. On the Shaker Simulation of Wind-Induced Non-Gaussian Random Vibration

    Directory of Open Access Journals (Sweden)

    Fei Xu

    2016-01-01

    Full Text Available A Gaussian signal is produced by ordinary random vibration controllers to test products in the laboratory, while the field data are usually non-Gaussian. Two methodologies are presented in this paper for shaker simulation of wind-induced non-Gaussian vibration. The first methodology synthesizes the non-Gaussian signal offline and replicates it on the shaker in the Time Waveform Replication (TWR) mode. A new synthesis method is used to model the non-Gaussian signal as a Gaussian signal multiplied by an amplitude modulation function (AMF). A case study is presented to show that the synthesized non-Gaussian signal has the same power spectral density (PSD), probability density function (PDF), and loading cycle distribution (LCD) as the field data. The second methodology derives a damage-equivalent Gaussian signal from the non-Gaussian signal based on the fatigue damage spectrum (FDS) and the extreme response spectrum (ERS) and reproduces it on the shaker in the closed-loop frequency-domain control mode. The PSD level and the duration time of the derived Gaussian signal can be manipulated for accelerated testing purposes. A case study is presented to show that the derived PSD matches the damage potential of the non-Gaussian environment for both fatigue and peak response.
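
    The first methodology, a Gaussian carrier multiplied by an amplitude modulation function, can be sketched as below: a band-limited Gaussian signal is modulated by a slowly varying positive envelope, which raises the kurtosis well above the Gaussian value of 3 while leaving the PSD shape largely unchanged. The band limits and envelope are illustrative assumptions, not the wind-induced field data.

```python
import numpy as np
from scipy import signal, stats

fs, T = 2048, 60                          # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)

# Band-limited Gaussian carrier (flat PSD in 20-200 Hz, illustrative only)
b, a = signal.butter(4, [20 / (fs / 2), 200 / (fs / 2)], btype="band")
gauss = signal.lfilter(b, a, np.random.randn(t.size))

# Slowly varying, always-positive amplitude modulation function (AMF)
amf = 1.0 + 0.9 * np.sin(2 * np.pi * 0.5 * t)
non_gauss = gauss * amf

print("kurtosis of Gaussian carrier: ", stats.kurtosis(gauss, fisher=False))
print("kurtosis of modulated signal: ", stats.kurtosis(non_gauss, fisher=False))
```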

  4. Open Burn/Open Detonation Dispersion Model (OBODM) User's Guide. Volume I. User's Instructions

    National Research Council Canada - National Science Library

    Bjorklund, Jay

    1998-01-01

    ...) of obsolete munitions and solid propellants. OBODM uses cloud/plume rise, dispersion, and deposition algorithms taken from existing models for instantaneous and quasi-continuous sources to predict the downwind transport and dispersion...

  5. Vertical dispersion from surface and elevated releases: An investigation of a Non-Gaussian plume model

    International Nuclear Information System (INIS)

    Brown, M.J.; Arya, S.P.; Snyder, W.H.

    1993-01-01

    The vertical diffusion of a passive tracer released from surface and elevated sources in a neutrally stratified boundary layer has been studied by comparing field and laboratory experiments with a non-Gaussian K-theory model that assumes power-law profiles for the mean velocity and vertical eddy diffusivity. Several important differences between model predictions and experimental data were discovered: (1) the model overestimated ground-level concentrations from surface and elevated releases at distances beyond the peak concentration; (2) the model overpredicted vertical mixing near elevated sources, especially in the upward direction; (3) the model-predicted exponent α in the exponential vertical concentration profile for a surface release [C̄(z) ∝ exp(-z^α)] was smaller than the experimentally measured exponent. Model closure assumptions and experimental shortcomings are discussed in relation to their probable effect on model predictions and experimental measurements. 42 refs., 13 figs., 3 tabs

  6. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring safe and reliable operation of batteries. For estimating battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open-circuit-voltage behaviors, the Gaussian model is employed to construct the battery model, and the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best choice based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO4 and LiMn2O4 battery datasets.

  7. Optimal multicopy asymmetric Gaussian cloning of coherent states

    International Nuclear Information System (INIS)

    Fiurasek, Jaromir; Cerf, Nicolas J.

    2007-01-01

    We investigate the asymmetric Gaussian cloning of coherent states which produces M copies from N input replicas in such a way that the fidelity of each copy may be different. We show that the optimal asymmetric Gaussian cloning can be performed with a single phase-insensitive amplifier and an array of beam splitters. We obtain a simple analytical expression characterizing the set of optimal asymmetric Gaussian cloning machines and prove the optimality of these cloners using the formalism of Gaussian completely positive maps and semidefinite programming techniques. We also present an alternative implementation of the asymmetric cloning machine where the phase-insensitive amplifier is replaced with a beam splitter, heterodyne detector, and feedforward

  8. Optimal multicopy asymmetric Gaussian cloning of coherent states

    Science.gov (United States)

    Fiurášek, Jaromír; Cerf, Nicolas J.

    2007-05-01

    We investigate the asymmetric Gaussian cloning of coherent states which produces M copies from N input replicas in such a way that the fidelity of each copy may be different. We show that the optimal asymmetric Gaussian cloning can be performed with a single phase-insensitive amplifier and an array of beam splitters. We obtain a simple analytical expression characterizing the set of optimal asymmetric Gaussian cloning machines and prove the optimality of these cloners using the formalism of Gaussian completely positive maps and semidefinite programming techniques. We also present an alternative implementation of the asymmetric cloning machine where the phase-insensitive amplifier is replaced with a beam splitter, heterodyne detector, and feedforward.

  9. A Fast Implicit Finite Difference Method for Fractional Advection-Dispersion Equations with Fractional Derivative Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Taohua Liu

    2017-01-01

    Full Text Available Fractional advection-dispersion equations, as generalizations of classical integer-order advection-dispersion equations, are used to model the transport of passive tracers carried by fluid flow in a porous medium. In this paper, we develop an implicit finite difference method for fractional advection-dispersion equations with fractional derivative boundary conditions. First-order consistency, solvability, unconditional stability, and first-order convergence of the method are proven. Then, we present a fast iterative method for the implicit finite difference scheme, which only requires storage of O(K) and computational cost of O(K log K). Traditionally, the Gaussian elimination method requires storage of O(K^2) and computational cost of O(K^3). Finally, the accuracy and efficiency of the method are checked with a numerical example.
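    The O(K log K) cost quoted above is typically obtained by exploiting the Toeplitz-like structure of the fractional difference operator, so that each matrix-vector product inside the iterative solver is done with FFTs instead of dense algebra. The sketch below shows that standard device (circulant embedding plus FFT) with Grünwald-Letnikov-type weights; it is illustrative only and is not the paper's exact scheme, and the order alpha = 1.5 and size K = 8 are assumed.

```python
# Sketch of an O(K log K) Toeplitz matrix-vector product via circulant embedding
# and the FFT - the usual building block of fast solvers for fractional
# advection-dispersion discretizations. Illustrative, not the paper's code.
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x in O(K log K), T Toeplitz with given first column/row."""
    k = len(x)
    # Embed T in a 2K x 2K circulant matrix; only its first column is needed.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    prod = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(k)])))
    return np.real(prod)[:k]

alpha, K = 1.5, 8                      # assumed fractional order and problem size
g = np.zeros(K)
g[0] = 1.0
for j in range(1, K):
    g[j] = g[j - 1] * (j - 1 - alpha) / j   # recursive Grünwald-Letnikov weights

first_col = g                          # lower-triangular Toeplitz operator
first_row = np.zeros(K)
first_row[0] = g[0]

x = np.arange(1.0, K + 1)
print(toeplitz_matvec(first_col, first_row, x))

# Dense O(K^2) check of the same product:
T = np.array([[g[i - j] if i >= j else 0.0 for j in range(K)] for i in range(K)])
print(T @ x)
```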

  10. How Gaussian can our Universe be?

    Energy Technology Data Exchange (ETDEWEB)

    Cabass, G. [Physics Department and INFN, Università di Roma "La Sapienza", P.le Aldo Moro 2, 00185, Rome (Italy); Pajer, E. [Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Schmidt, F., E-mail: giovanni.cabass@roma1.infn.it, E-mail: e.pajer@uu.nl, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ^2/k_s^2, similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  11. How Gaussian can our Universe be?

    Science.gov (United States)

    Cabass, G.; Pajer, E.; Schmidt, F.

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_l^2/k_s^2, similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  12. Estimation of the environmental impact of emissions from the La Reina NEC, by atmospheric dispersion modeling

    International Nuclear Information System (INIS)

    Bustamante C, Paula M.; Ortiz R, Marcela A.

    1996-01-01

    Based on a dispersion model, an accidental release of radioactive material to the atmosphere was simulated. To evaluate the consequences of the accidental release, the PC COSYMA program (KfK and NRPB) was used. The atmospheric dispersion model was MUSEMET, a segmented Gaussian plume model which requires information on meteorological conditions for a period of one year. This study was carried out to determine the plume's behavior and path, and to define protective actions. The meteorological analysis shows an airflow from the WSW and a channeling flow from the SE at night, due to topographical influences. (author)
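    Segmented Gaussian plume codes such as MUSEMET build on the basic ground-level Gaussian plume expression for a continuous point source. The sketch below shows that generic formula only (it is not the MUSEMET or PC COSYMA implementation), and the sigma_y/sigma_z power laws are placeholder fits rather than any particular stability-class scheme.

```python
# Generic ground-level Gaussian plume concentration for a continuous point source,
# with full ground reflection. Illustrative only; dispersion parameterizations are
# assumed placeholders, not a specific stability-class scheme.
import numpy as np

def ground_level_concentration(q, u, x, y, h, a=0.22, b=0.20):
    """chi(x, y, z=0) [kg/m^3] for emission rate q [kg/s], wind speed u [m/s],
    downwind distance x [m], crosswind offset y [m], effective release height h [m]."""
    sigma_y = a * x / np.sqrt(1.0 + 1e-4 * x)   # assumed horizontal dispersion fit
    sigma_z = b * x / np.sqrt(1.0 + 1e-4 * x)   # assumed vertical dispersion fit
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * np.exp(-0.5 * (h / sigma_z) ** 2))

# Example: 1 kg/s release from a 50 m stack in a 4 m/s wind, receptor 1 km downwind on axis.
print(ground_level_concentration(q=1.0, u=4.0, x=1000.0, y=0.0, h=50.0))
```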

  13. Boltzmann-Gaussian transition under specific noise effect

    International Nuclear Information System (INIS)

    Anh, Chu Thuy; Lan, Nguyen Tri; Viet, Nguyen Ai

    2014-01-01

    It is observed that a short-time data set of market returns presents an almost symmetric Boltzmann distribution, whereas a long-time data set tends to show a Gaussian distribution. To understand this universal phenomenon, many hypotheses spanning a wide range of interdisciplinary research have been proposed. In the current work, the effect of background fluctuations on the symmetric Boltzmann distribution is investigated. A numerical calculation is performed to show that Gaussian noise may cause the transition from the initial Boltzmann distribution to a Gaussian one. The obtained results reflect the non-dynamic nature of the transition under consideration.

  14. Fault Detection for Non-Gaussian Stochastic Systems with Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Tao Li

    2013-01-01

    Full Text Available Fault detection (FD) for non-Gaussian stochastic systems with time-varying delay is studied. The available information for the addressed problem is the input and the measured output probability density functions (PDFs) of the system. In this framework, firstly, by constructing an augmented Lyapunov functional which involves some slack variables and a tuning parameter, a delay-dependent condition for the existence of an FD observer is derived in terms of a linear matrix inequality (LMI), and the fault can be detected through a threshold. Secondly, in order to improve the detection sensitivity, an optimal algorithm is applied to minimize the threshold value. Finally, a paper-making process example is given to demonstrate the applicability of the proposed approach.

  15. Gaussian sum rules for optical functions

    International Nuclear Information System (INIS)

    Kimel, I.

    1981-12-01

    A new (Gaussian) type of sum rules (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain, in general, a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author) [pt]

  16. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13,78,31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
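    The basic GP regression equations that such tutorials cover can be written in a few lines. The sketch below is a minimal example with a squared-exponential kernel and hand-fixed hyperparameters (no marginal-likelihood optimization), intended only to illustrate the predictive mean and variance computation.

```python
# Minimal Gaussian process regression sketch with a squared-exponential kernel.
# Hyperparameters (lengthscale, signal variance, noise) are fixed by hand.
import numpy as np

def sq_exp_kernel(a, b, lengthscale=0.5, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)   # noisy observations
x_test = np.linspace(0.0, 5.0, 100)

noise = 0.1**2
K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_s = sq_exp_kernel(x_test, x_train)
K_ss = sq_exp_kernel(x_test, x_test)

alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                                           # predictive mean
cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)                 # predictive covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))              # pointwise uncertainty
print(mean[:5], std[:5])
```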

  17. Anomalous behavior in the third harmonic generation z response through dispersion induced shape changes and matching χ(3)

    Science.gov (United States)

    Pillai, Rajesh S.; Brakenhoff, G. J.; Müller, M.

    2006-09-01

    The third harmonic generation (THG) axial response in the vicinity of an interface formed by two isotropic materials of normal dispersion is typically single peaked, with the maximum intensity at the interface position. Here it is shown experimentally that this THG z response may show anomalous behavior—being double peaked with a dip coinciding with the interface position—when the THG contributions from both materials are of similar magnitude. The observed anomalous behavior is explained, using paraxial Gaussian theory, by considering dispersion induced shape changes in the THG z response.

  18. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox...

  19. Improving IUE High Dispersion Extraction

    Science.gov (United States)

    Lawton, Patricia J.; VanSteenberg, M. E.; Massa, D.

    2007-01-01

    We present a different method to extract high dispersion International Ultraviolet Explorer (IUE) spectra from the New Spectral Image Processing System (NEWSIPS) geometrically and photometrically corrected (SIHI) images of the echellogram. The new algorithm corrects many of the deficiencies that exist in the NEWSIPS high dispersion (SIHI) spectra. Specifically, it does a much better job of accounting for the overlap of the higher echelle orders, it eliminates a significant time dependency in the extracted spectra (which can be traced to the background model used in the NEWSIPS extractions), and it can extract spectra from echellogram images that are more highly distorted than the NEWSIPS extraction routines can handle. Together, these improvements yield a set of IUE high dispersion spectra whose scientific integrity is significantly better than the NEWSIPS products. This work has been supported by NASA ADP grants.

  20. Gaussian plume model for the SO{sub 2} in a thermoelectric power plant; Modelo de pluma gaussiano para el SO{sub 2} en una central termoelectrica

    Energy Technology Data Exchange (ETDEWEB)

    Reyes L, C; Munoz Ledo, C R [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1993-12-31

    The Gaussian Plume Model is an analytical extension to simulate the dispersion of the SO{sub 2} concentration at ground level as a function of the emission changes in the spot sources, as well as the pollutant dispersion in the Wind Rose, when the necessary parameters are fed. The model was elaborated in a personal computer and the results produced are generated in text form. [Espanol] El modelo de pluma gaussiano es una extension analitica para simular la dispersion de las concentraciones de SO{sub 2} a nivel del piso en funcion de los cambios de las emisiones en las fuentes puntuales, asi como, la dispersion del contaminante en la rosa de los vientos cuando se le alimentan los parametros necesarios. El modelo fue elaborado en una computadora personal y los resultados que proporciona los genera en modo texto.

  1. Gaussian plume model for the SO{sub 2} in a thermoelectric power plant; Modelo de pluma gaussiano para el SO{sub 2} en una central termoelectrica

    Energy Technology Data Exchange (ETDEWEB)

    Reyes L, C.; Munoz Ledo, C. R. [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1992-12-31

    The Gaussian Plume Model is an analytical extension to simulate the dispersion of the SO{sub 2} concentration at ground level as a function of the emission changes in the spot sources, as well as the pollutant dispersion in the Wind Rose, when the necessary parameters are fed. The model was elaborated in a personal computer and the results produced are generated in text form. [Espanol] El modelo de pluma gaussiano es una extension analitica para simular la dispersion de las concentraciones de SO{sub 2} a nivel del piso en funcion de los cambios de las emisiones en las fuentes puntuales, asi como, la dispersion del contaminante en la rosa de los vientos cuando se le alimentan los parametros necesarios. El modelo fue elaborado en una computadora personal y los resultados que proporciona los genera en modo texto.

  2. Integration of non-Gaussian fields

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Mohr, Gunnar; Hoffmeyer, Pernille

    1996-01-01

    The limitations of the validity of the central limit theorem argument as applied to definite integrals of non-Gaussian random fields are empirically explored by way of examples. The purpose is to investigate in specific cases whether the asymptotic convergence to the Gaussian distribution is fast. [...] and Randrup-Thomsen, S. Reliability of silo ring under lognormal stochastic pressure using stochastic interpolation. Proc. IUTAM Symp., Probabilistic Structural Mechanics: Advances in Structural Reliability Methods, San Antonio, TX, USA, June 1993 (eds.: P. D. Spanos & Y.-T. Wu), pp. 134-162. Springer, Berlin.

  3. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving object detection algorithm aimed at fast detection and localization of moving objects. A pixel and its neighbors are used as an image vector to represent that pixel, and each YUV chrominance component is modeled with its own mixture of Gaussians. In order to make full use of spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background.
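    The background-modeling step can be illustrated with a simplified per-pixel Gaussian model on a single chrominance channel. The sketch below uses one Gaussian per pixel rather than the paper's full mixture, and the learning rate, initial variance, and threshold are assumed values chosen only for illustration.

```python
# Simplified sketch of per-pixel Gaussian background modelling on one chrominance
# channel (single Gaussian per pixel, not the paper's full mixture; parameters are
# illustrative).
import numpy as np

class GaussianBackground:
    def __init__(self, first_frame, learning_rate=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)   # assumed initial variance
        self.lr, self.k = learning_rate, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.mean)
        foreground = diff > self.k * np.sqrt(self.var)     # pixels outside k sigma
        # Update the background model only where the pixel matched the background.
        bg = ~foreground
        self.mean[bg] += self.lr * (frame[bg] - self.mean[bg])
        self.var[bg] += self.lr * ((frame[bg] - self.mean[bg]) ** 2 - self.var[bg])
        return foreground

# Usage on synthetic frames: a bright "object" appears in the second frame.
frames = [np.zeros((4, 4)), np.zeros((4, 4))]
frames[1][1:3, 1:3] = 80.0
model = GaussianBackground(frames[0])
print(model.apply(frames[1]).astype(int))
```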

  4. Dependency of non-homogeneity energy dispersion on absorbance line-shape of luminescent polymers

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Marcelo Castanheira da, E-mail: mar_castanheira@yahoo.com.br [Centro de Ciências Biológicas e da Natureza, Universidade Federal do Acre, CP 500, 69915-900 Rio Branco, AC (Brazil); Instituto de Física, Universidade Federal de Uberlândia, CP 593, 38400-902 Uberlândia, MG (Brazil); Santos Silva, H.; Silva, R.A.; Marletta, Alexandre [Instituto de Física, Universidade Federal de Uberlândia, CP 593, 38400-902 Uberlândia, MG (Brazil)

    2013-01-16

    In this paper, we study the importance of the non-homogeneity energy dispersion for the absorption line-shape of luminescent polymers. The optical transition probability was calculated based on the molecular exciton model, Franck–Condon states, a Gaussian distribution of non-entangled chains with conjugation degree n, semi-empirical parameterization of the energy gap, the electric dipole moment, and electron-vibrational mode coupling. Within the approach of the 1/n functional dependence of the energy gap, the inclusion of the non-homogeneity energy dispersion 1/n^2 is essential to obtain good agreement with experimental data, mainly where the absorption spectra display peak widths of about 65 meV. For unresolved absorption spectra, such as those observed for a large number of conjugated polymers processed via the spin-coating technique, the non-homogeneity energy dispersion parameterization is not significant. The results were supported by applying the model to poly(p-phenylene vinylene) films.

  5. Multivariate spatial Gaussian mixture modeling for statistical clustering of hemodynamic parameters in functional MRI

    International Nuclear Information System (INIS)

    Fouque, A.L.; Ciuciu, Ph.; Risser, L.; Fouque, A.L.; Ciuciu, Ph.; Risser, L.

    2009-01-01

    In this paper, a novel statistical parcellation of intra-subject functional MRI (fMRI) data is proposed. The key idea is to identify functionally homogeneous regions of interest from their hemodynamic parameters. To this end, a non-parametric voxel-based estimation of the hemodynamic response function is performed as a prerequisite. Then, the extracted hemodynamic features are entered as the input data of a Multivariate Spatial Gaussian Mixture Model (MSGMM) to be fitted. The goal of the spatial aspect is to favor the recovery of connected components in the mixture. Our statistical clustering approach is original in the sense that it extends existing works done on univariate spatially regularized Gaussian mixtures. A specific Gibbs sampler is derived to account for different covariance structures in the feature space. On realistic artificial fMRI datasets, it is shown that our algorithm is helpful for identifying a parsimonious functional parcellation required in the context of joint detection-estimation of brain activity. This allows us to overcome the classical assumption of spatial stationarity of the BOLD signal model. (authors)

  6. Estimators for local non-Gaussianities

    International Nuclear Information System (INIS)

    Creminelli, P.; Senatore, L.; Zaldarriaga, M.

    2006-05-01

    We study the likelihood function of data given f_NL for the so-called local type of non-Gaussianity. In this case the curvature perturbation is a non-linear function, local in real space, of a Gaussian random field. We compute the Cramer-Rao bound for f_NL and show that for small values of f_NL the 3-point function estimator saturates the bound and is equivalent to calculating the full likelihood of the data. However, for sufficiently large f_NL, the naive 3-point function estimator has a much larger variance than previously thought. In the limit in which the departure from Gaussianity is detected with high confidence, error bars on f_NL only decrease as 1/ln N_pix rather than N_pix^(-1/2) as the size of the data set increases. We identify the physical origin of this behavior and explain why it only affects the local type of non-Gaussianity, where the contribution of the first multipoles is always relevant. We find a simple improvement to the 3-point function estimator that makes the square root of its variance decrease as N_pix^(-1/2) even for large f_NL, asymptotically approaching the Cramer-Rao bound. We show that using the modified estimator is practically equivalent to computing the full likelihood of f_NL given the data. Thus other statistics of the data, such as the 4-point function and Minkowski functionals, contain no additional information on f_NL. In particular, we explicitly show that the recent claims about the relevance of the 4-point function are not correct. By direct inspection of the likelihood, we show that the data do not contain enough information for any statistic to be able to constrain higher order terms in the relation between the Gaussian field and the curvature perturbation, unless these are orders of magnitude larger than the size suggested by the current limits on f_NL. (author)

  7. A comparison on the propagation characteristics of focused Gaussian beam and fundamental Gaussian beam in vacuum

    International Nuclear Information System (INIS)

    Liu Shixiong; Guo Hong; Liu Mingwei; Wu Guohua

    2004-01-01

    Propagation characteristics of focused Gaussian beam (FoGB) and fundamental Gaussian beam (FuGB) propagating in vacuum are investigated. Based on the Fourier transform and the angular spectral analysis, the transverse component and the second-order approximate longitudinal component of the electric field are obtained in the paraxial approximation. The electric field components, the phase velocity and the group velocity of FoGB are compared with those of FuGB. The spot size of FoGB is also discussed

  8. Calculating emittance for Gaussian and Non-Gaussian distributions by the method of correlations for slits

    International Nuclear Information System (INIS)

    Tan, Cheng-Yang; Fermilab

    2006-01-01

    One common way for measuring the emittance of an electron beam is with the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative way by using the method of correlations which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, this method yields an effective emittance that can serve as a yardstick for emittance comparisons

  9. Non-Gaussianity in a quasiclassical electronic circuit

    Science.gov (United States)

    Suzuki, Takafumi J.; Hayakawa, Hisao

    2017-05-01

    We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuation of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable to correctly estimate the microscopic transfer events in the QPC with the quasiclassical inverse formula.

  10. Imprint of primordial non-Gaussianity on dark matter halo profiles

    Energy Technology Data Exchange (ETDEWEB)

    Dizgah, Azadeh Moradinezhad; Dodelson, Scott; Riotto, Antonio

    2013-09-01

    We study the impact of primordial non-Gaussianity on the density profile of dark matter halos by using the semi-analytical model introduced recently by Dalal et al., which relates the peaks of the initial linear density field to the final density profile of dark matter halos. Models with primordial non-Gaussianity typically produce an initial density field that differs from that produced in Gaussian models. We use the path-integral formulation of excursion set theory to calculate the non-Gaussian corrections to the peak profile and derive the statistics of the peaks of a non-Gaussian density field. In the context of the semi-analytic model for halo profiles, currently allowed values for primordial non-Gaussianity would increase the shapes of the inner dark matter profiles, but only at the sub-percent level except in the very innermost regions.

  11. On the dependence structure of Gaussian queues

    NARCIS (Netherlands)

    Es-Saghouani, A.; Mandjes, M.R.H.

    2009-01-01

    In this article we study Gaussian queues (that is, queues fed by Gaussian processes, such as fractional Brownian motion (fBm) and the integrated Ornstein-Uhlenbeck (iOU) process), with a focus on the dependence structure of the workload process. The main question is to what extent does the workload

  12. Shedding new light on Gaussian harmonic analysis

    NARCIS (Netherlands)

    Teuwen, J.J.B.

    2016-01-01

    This dissertation consists out of two rather disjoint parts. One part concerns some results on Gaussian harmonic analysis and the other on an optimization problem in optics. In the first part we study the Ornstein–Uhlenbeck process with respect to the Gaussian measure. We focus on two areas. One is

  13. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    Directory of Open Access Journals (Sweden)

    Georgios C Manikis

    Full Text Available The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance: the adjusted R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fit. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG did not perform better than any other model in all patients or over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
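    The model-selection step (AIC comparison of a mono-exponential Gaussian fit against a bi-exponential fit of a signal-vs-b curve) can be sketched on synthetic data. The sketch below is not the study's pipeline: the parameter values, noise level, and AIC formula for Gaussian residuals are assumptions made only to illustrate the comparison.

```python
# Sketch: compare a mono-exponential (Gaussian) and a bi-exponential diffusion model
# on a single voxel's signal-vs-b curve via AIC. Synthetic data; illustrative only.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0., 25., 50., 100., 500., 1000., 2000.])          # s/mm^2, as in the protocol

def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def bi(b, s0, f, d_fast, d_slow):
    return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

rng = np.random.default_rng(2)
signal = bi(b, 1.0, 0.3, 3e-3, 0.7e-3) + 0.01 * rng.standard_normal(b.size)

def aic(y, y_hat, n_params):
    rss = np.sum((y - y_hat) ** 2)
    n = y.size
    return n * np.log(rss / n) + 2 * n_params           # AIC for Gaussian residuals

p_mono, _ = curve_fit(mono, b, signal, p0=[1.0, 1e-3])
p_bi, _ = curve_fit(bi, b, signal, p0=[1.0, 0.3, 3e-3, 0.5e-3],
                    bounds=([0, 0, 0, 0], [2, 1, 1, 1]))

print("AIC mono:", aic(signal, mono(b, *p_mono), 2))
print("AIC bi  :", aic(signal, bi(b, *p_bi), 4))                 # lower AIC is preferred
```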

  14. Unifying Pore Network Modeling, Continuous Time Random Walk (CTRW) Theory and Experiment to Describe Impact of Spatial Heterogeneities on Solute Dispersion at Multiple Length-scales

    Science.gov (United States)

    Bijeljic, B.; Blunt, M. J.; Rhodes, M. E.

    2009-04-01

    This talk will describe and highlight the advantages offered by a novel methodology that unifies pore network modeling, CTRW theory and experiment in the description of solute dispersion in porous media. Solute transport in a porous medium is characterized by the interplay of advection and diffusion (described by the Peclet number, Pe) that causes dispersion of solute particles. Dispersion is traditionally described by dispersion coefficients, D, that are commonly calculated from the spatial moments of the plume. Using a pore-scale network model based on particle tracking, the rich Peclet-number dependence of the dispersion coefficient is predicted from first principles and is shown to compare well with experimental data for restricted diffusion, transition, power-law and mechanical dispersion regimes in the asymptotic limit. In the asymptotic limit D is constant and can be used in an averaged advection-dispersion equation. However, it is highly important to recognize that, until the velocity field is fully sampled, the particle transport is non-Gaussian and D possesses temporal or spatial variation. Furthermore, temporal probability density functions (PDFs) of tracer particles are studied in pore networks, and an excellent agreement for the spectrum of pore-to-pore transition times is obtained between network model results and CTRW theory. Based on the truncated power-law interpretation of the PDFs, the physical origin of the power-law scaling of dispersion coefficient vs. Peclet number has been explained for unconsolidated porous media, sands and a number of sandstones, arriving at the same conclusion from numerical network modelling, analytic CTRW theory and experiment. The length traveled by solute plumes before Gaussian behaviour is reached increases with an increase in heterogeneity and/or Pe. This opens up the question on the nature of dispersion in natural systems where the heterogeneities at the larger scales will significantly increase the range of

  15. Representation of Gaussian semimartingales with applications to the covariance function

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2010-01-01

    stationary Gaussian semimartingales and their canonical decomposition. Thirdly, we give a new characterization of the covariance function of Gaussian semimartingales, which enables us to characterize the class of martingales and the processes of bounded variation among the Gaussian semimartingales. We...

  16. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF) which combines the ITWF algorithm and the TAF algorithm is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain higher success rate and faster convergence speed compared with other algorithms. Especially, for the noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from the magnitude measurements whose number is about 2.5 times of the signal length, which is close to the theoretic limit (about 2 times of the signal length). And it usually converges to the optimal solution within 20 iterations which is much less than the state-of-the-art algorithms.

  17. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior. Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore

  18. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    Science.gov (United States)

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
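    The kernel target alignment criterion that the paper optimizes globally can be written and evaluated in a few lines. The sketch below only illustrates the criterion itself, computed for a Gaussian (RBF) kernel over a small grid of bandwidths on synthetic two-class data; it does not reproduce the paper's power-transformation convexification or the d.c./outer-approximation global optimizer.

```python
# Sketch: kernel-target alignment of a Gaussian (RBF) kernel with the ideal label
# kernel y y^T, evaluated on a small grid of bandwidths. Illustrative only.
import numpy as np

def rbf_kernel(X, gamma):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def alignment(K, y):
    Y = np.outer(y, y)                                    # ideal target kernel
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

for gamma in [0.01, 0.1, 1.0, 10.0]:
    print(f"gamma={gamma:5.2f}  alignment={alignment(rbf_kernel(X, gamma), y):.3f}")
```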

  19. Efficiency of the human observer for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds.

    Science.gov (United States)

    Park, Subok; Gallas, Bradon D; Badano, Aldo; Petrick, Nicholas A; Myers, Kyle J

    2007-04-01

    A previous study [J. Opt. Soc. Am. A22, 3 (2005)] has shown that human efficiency for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds is approximately 4%. This human efficiency is much less than the reported 40% efficiency that has been documented for Gaussian-distributed lumpy backgrounds [J. Opt. Soc. Am. A16, 694 (1999) and J. Opt. Soc. Am. A18, 473 (2001)]. We conducted a psychophysical study with a number of changes, specifically in display-device calibration and data scaling, from the design of the aforementioned study. Human efficiency relative to the ideal observer was found again to be approximately 5%. Our variance analysis indicates that neither scaling nor display made a statistically significant difference in human performance for the task. We conclude that the non-Gaussian distributed lumpy background is a major factor in our low human-efficiency results.

  20. Limit theorems for functionals of Gaussian vectors

    Institute of Scientific and Technical Information of China (English)

    Hongshuai DAI; Guangjun SHEN; Lingtao KONG

    2017-01-01

    Operator self-similar processes, as an extension of self-similar processes, have been studied extensively. In this work, we study limit theorems for functionals of Gaussian vectors. Under some conditions, we determine that the limit of partial sums of functionals of a stationary Gaussian sequence of random vectors is an operator self-similar process.

  1. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system

    Science.gov (United States)

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.

  2. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.

    2013-07-14

    The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability at reproducing the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our

  3. Uranium Dispersion and Dosimetry (UDAD) Code

    International Nuclear Information System (INIS)

    Momeni, M.H.; Yuan, Y.; Zielen, A.J.

    1979-05-01

    The Uranium Dispersion and Dosimetry (UDAD) Code provides estimates of potential radiation exposure to individuals and to the general population in the vicinity of a uranium processing facility. The UDAD Code incorporates the radiation dose from the airborne release of radioactive materials, and includes dosimetry of inhalation, ingestion, and external exposures. The removal of radioactive particles from a contaminated area by wind action is estimated, atmospheric concentrations of radioactivity from specific sources are calculated, and source depletion as a result of deposition, fallout, and ingrowth of radon daughters is included in a sector-averaged Gaussian plume dispersion model. The average air concentration at any given receptor location is assumed to be constant during each annual release period, but to increase from year to year because of resuspension. Surface contamination and deposition velocity are estimated. Calculation of the inhalation dose and dose rate to an individual is based on the ICRP Task Group Lung Model. Estimates of the dose to the bronchial epithelium of the lung from inhalation of radon and its short-lived daughters are calculated based on a dose conversion factor from the BEIR report. External radiation exposure includes radiation from airborne radionuclides and exposure to radiation from contaminated ground. Terrestrial food pathways include vegetation, meat, milk, poultry, and eggs. Internal dosimetry is based on ICRP recommendations. In addition, individual dose commitments, population dose commitments, and environmental dose commitments are computed. This code also may be applied to the dispersion of any other pollutant.

  4. Consistency relations for sharp inflationary non-Gaussian features

    Energy Technology Data Exchange (ETDEWEB)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris [Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, Blanco Encalada 2008, Santiago (Chile); Soto, Alex, E-mail: sander.mooij@ing.uchile.cl, E-mail: gpalmaquilod@ing.uchile.cl, E-mail: gpanotop@ing.uchile.cl, E-mail: gatogeno@gmail.com [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Las Palmeras 3425, Ñuñoa, Santiago (Chile)

    2016-09-01

    If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.

  5. Consistency relations for sharp inflationary non-Gaussian features

    International Nuclear Information System (INIS)

    Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris; Soto, Alex

    2016-01-01

    If cosmic inflation suffered tiny time-dependent deviations from the slow-roll regime, these would induce the existence of small scale-dependent features imprinted in the primordial spectra, with their shapes and sizes revealing information about the physics that produced them. Small sharp features could be suppressed at the level of the two-point correlation function, making them undetectable in the power spectrum, but could be amplified at the level of the three-point correlation function, offering us a window of opportunity to uncover them in the non-Gaussian bispectrum. In this article, we show that sharp features may be analyzed using only data coming from the three point correlation function parametrizing primordial non-Gaussianity. More precisely, we show that if features appear in a particular non-Gaussian triangle configuration (e.g. equilateral, folded, squeezed), these must reappear in every other configuration according to a specific relation allowing us to correlate features across the non-Gaussian bispectrum. As a result, we offer a method to study scale-dependent features generated during inflation that depends only on data coming from measurements of non-Gaussianity, allowing us to omit data from the power spectrum.

  6. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.

  7. Noise filtering algorithm for the MFTF-B computer based control system

    International Nuclear Information System (INIS)

    Minor, E.G.

    1983-01-01

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions
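    The filter described above is essentially a deadband: a new reading is forwarded to the database only when it differs from the last reported value by more than a threshold, which requires storing a single value per channel and only a subtraction and a comparison per update. The sketch below is a minimal illustration of that idea, not the MFTF-B code; the threshold value is an assumed example.

```python
# Minimal deadband-style filter sketch: an analog reading is reported only when it
# moves more than a threshold away from the last reported value (one stored value
# per channel; threshold is illustrative).
class DeadbandFilter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None      # the single per-channel state variable

    def update(self, value):
        """Return value if it should be reported, otherwise None (treated as noise)."""
        if self.last_reported is None or abs(value - self.last_reported) > self.threshold:
            self.last_reported = value
            return value
        return None

# Usage: small fluctuations are suppressed, the step change at the end is reported.
f = DeadbandFilter(threshold=0.5)
for reading in [10.0, 10.1, 9.9, 10.2, 12.0]:
    print(reading, "->", f.update(reading))
```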

  8. Comparing Fixed and Variable-Width Gaussian Networks

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Kainen, P.C.

    2014-01-01

    Roč. 57, September (2014), s. 23-28 ISSN 0893-6080 R&D Projects: GA MŠk(CZ) LD13002 Institutional support: RVO:67985807 Keywords : Gaussian radial and kernel networks * Functionally equivalent networks * Universal approximators * Stabilizers defined by Gaussian kernels * Argminima of error functionals Subject RIV: IN - Informatics, Computer Science Impact factor: 2.708, year: 2014

  9. Uncertainty in perception and the Hierarchical Gaussian Filter

    Directory of Open Access Journals (Sweden)

    Christoph Daniel Mathys

    2014-11-01

    Full Text Available In its full sense, perception rests on an agent's model of how its sensory input comes about and the inferences it draws based on this model. These inferences are necessarily uncertain. Here, we illustrate how the hierarchical Gaussian filter (HGF) offers a principled and generic way to deal with the several forms that uncertainty in perception takes. The HGF is a recent derivation of one-step update equations from Bayesian principles that rests on a hierarchical generative model of the environment and its (in)stability. It is computationally highly efficient, allows for online estimates of hidden states, and has found numerous applications to experimental data from human subjects. In this paper, we generalize previous descriptions of the HGF and its account of perceptual uncertainty. First, we explicitly formulate the extension of the HGF's hierarchy to any number of levels; second, we discuss how various forms of uncertainty are accommodated by the minimization of variational free energy as encoded in the update equations; third, we combine the HGF with decision models and demonstrate the inversion of this combination; finally, we report a simulation study that compared four optimization methods for inverting the HGF/decision model combination at different noise levels. These four methods (Nelder-Mead simplex algorithm, Gaussian process-based global optimization, variational Bayes, and Markov chain Monte Carlo sampling) all performed well even under considerable noise, with variational Bayes offering the best combination of efficiency and informativeness of inference. Our results demonstrate that the HGF provides a principled, flexible, and efficient - but at the same time intuitive - framework for the resolution of perceptual uncertainty in behaving agents.

  10. The Prediction of Length-of-day Variations Based on Gaussian Processes

    Science.gov (United States)

    Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.

    2015-01-01

    Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations, such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction precision is analyzed and compared with that of the back propagation neural network (BPNN) and general regression neural network (GRNN) models, and with the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.

  11. Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis

    Directory of Open Access Journals (Sweden)

    M. Bocquet

    2008-02-01

    Full Text Available For a start, recent techniques devoted to the reconstruction of sources of an atmospheric tracer at continental scale are introduced. A first method is based on the principle of maximum entropy on the mean and is briefly reviewed here. A second approach, which has not been applied in this field yet, is based on an exact Bayesian approach, through a maximum a posteriori estimator. The methods share common grounds, and both perform equally well in practice. When specific prior hypotheses on the sources are taken into account, such as positivity or boundedness, both methods lead to purposefully devised cost functions. These cost functions are not necessarily quadratic because the underlying assumptions are not Gaussian. As a consequence, several mathematical tools developed in data assimilation on the basis of quadratic cost functions in order to establish a posteriori analysis need to be extended to this non-Gaussian framework. Concomitantly, the second-order sensitivity analysis needs to be adapted, as well as the computations of the averaging kernels of the source and the errors obtained in the reconstruction. All of these developments are applied to a real case of tracer dispersion: the European Tracer Experiment (ETEX). Comparisons are made between a least squares cost function (similar to the so-called 4D-Var approach) and a cost function which is not based on Gaussian hypotheses. Besides, the information content of the observations which is used in the reconstruction is computed and studied on the application case. A connection with the degrees of freedom for signal is also established. As a by-product of these methodological developments, conclusions are drawn on the information content of the ETEX dataset as seen from the inverse modelling point of view.

  12. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    Science.gov (United States)

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
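
    The final step described above, finding the closest probability distribution to a set of real numbers summing to one, can be sketched as follows. This reconstruction sorts the candidate eigenvalues (an O(d log d) step, whereas the paper describes a linear-time variant), zeroes out the most negative ones and spreads the deficit uniformly over the remaining entries; it is a sketch of the idea, not the authors' reference implementation.

```python
import numpy as np

def closest_probability_vector(mu):
    """Closest vector in the 2-norm with non-negative entries summing to one,
    given real numbers `mu` that already sum to one (e.g. the eigenvalues of a
    candidate density matrix)."""
    mu = np.sort(np.asarray(mu, dtype=float))      # ascending order
    d = len(mu)
    lam = np.zeros(d)
    acc = 0.0                                      # negative mass to redistribute
    for i in range(d):
        remaining = d - i
        if mu[i] + acc / remaining < 0.0:
            acc += mu[i]                           # zero this entry, carry the deficit
        else:
            lam[i:] = mu[i:] + acc / remaining     # spread the deficit over the rest
            break
    return lam

# Example: a candidate spectrum with small negative (nonphysical) eigenvalues.
print(closest_probability_vector([0.6, 0.5, -0.04, -0.06]))   # [0.   0.   0.45 0.55]
```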

  13. Control method for multi-input multi-output non-Gaussian random vibration test with cross spectra consideration

    Directory of Open Access Journals (Sweden)

    Ronghui ZHENG

    2017-12-01

    Full Text Available A control method for Multi-Input Multi-Output (MIMO non-Gaussian random vibration test with cross spectra consideration is proposed in the paper. The aim of the proposed control method is to replicate the specified references composed of auto spectral densities, cross spectral densities and kurtoses on the test article in the laboratory. It is found that the cross spectral densities will bring intractable coupling problems and induce difficulty for the control of the multi-output kurtoses. Hence, a sequential phase modification method is put forward to solve the coupling problems in multi-input multi-output non-Gaussian random vibration test. To achieve the specified responses, an improved zero memory nonlinear transformation is utilized first to modify the Fourier phases of the signals with sequential phase modification method to obtain one frame reference response signals which satisfy the reference spectra and reference kurtoses. Then, an inverse system method is used in frequency domain to obtain the continuous stationary drive signals. At the same time, the matrix power control algorithm is utilized to control the spectra and kurtoses of the response signals further. At the end of the paper, a simulation example with a cantilever beam and a vibration shaker test are implemented and the results support the proposed method very well. Keywords: Cross spectra, Kurtosis control, Multi-input multi-output, Non-Gaussian, Random vibration test

  14. A Fast Detection Algorithm for the X-Ray Pulsar Signal

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2017-01-01

    Full Text Available The detection of the X-ray pulsar signal is important for autonomous navigation systems using X-ray pulsars. Under conditions of short observation time and a limited number of photons for detection, the noise does not obey the Gaussian distribution. This fact has received little consideration in the extant literature. In this paper, the model of the X-ray pulsar signal is rebuilt as a nonhomogeneous Poisson distribution and, under the condition of a fixed false alarm rate, a fast detection algorithm based on maximizing the detection probability is proposed. Simulation results show the effectiveness of the proposed detection algorithm.

  15. Computational algorithms for simulations in atmospheric optics.

    Science.gov (United States)

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
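
    A minimal sketch of a spectral-phase style generator of a 2-D random field is shown below: complex Gaussian noise is filtered by the square root of a Kolmogorov-type power spectrum and inverse-Fourier transformed. The constants, normalization and parameters (Fried parameter r0, grid spacing) are illustrative assumptions and not the modified method of the paper.

```python
import numpy as np

def random_phase_screen(n=256, dx=0.01, r0=0.1, rng=None):
    """One 2-D random phase screen with an approximately Kolmogorov spectrum,
    generated by filtering complex Gaussian noise in the Fourier domain.
    Constants and normalization are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fr[0, 0] = 1.0 / (n * dx)                            # avoid the f = 0 singularity
    psd = 0.023 * r0 ** (-5.0 / 3.0) * fr ** (-11.0 / 3.0)   # Kolmogorov-type phase PSD
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    df = 1.0 / (n * dx)
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * n * df
    return screen.real

phi = random_phase_screen()
```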

  16. Gaussian Process-Mixture Conditional Heteroscedasticity.

    Science.gov (United States)

    Platanios, Emmanouil A; Chatzis, Sotirios P

    2014-05-01

    Generalized autoregressive conditional heteroscedasticity (GARCH) models have long been considered as one of the most successful families of approaches for volatility modeling in financial return series. In this paper, we propose an alternative approach based on methodologies widely used in the field of statistical machine learning. Specifically, we propose a novel nonparametric Bayesian mixture of Gaussian process regression models, each component of which models the noise variance process that contaminates the observed data as a separate latent Gaussian process driven by the observed data. This way, we essentially obtain a Gaussian process-mixture conditional heteroscedasticity (GPMCH) model for volatility modeling in financial return series. We impose a nonparametric prior with power-law nature over the distribution of the model mixture components, namely the Pitman-Yor process prior, to allow for better capturing modeled data distributions with heavy tails and skewness. Finally, we provide a copula-based approach for obtaining a predictive posterior for the covariances over the asset returns modeled by means of a postulated GPMCH model. We evaluate the efficacy of our approach in a number of benchmark scenarios, and compare its performance to state-of-the-art methodologies.

  17. Plume dispersion and deposition processes of tracer gas and aerosols in short-distance experiments

    International Nuclear Information System (INIS)

    Taeschner, M.; Bunnenberg, C.

    1988-01-01

    Data used in this paper were extracted from field experiments carried out in France and Canada to study the pathway of elementary tritium after possible emissions from future fusion reactors and from short-range experiments with nutrient aerosols performed in a German forest in view of a therapy of damaged coniferous trees by foliar nutrition. Comparisons of dispersion parameters evaluated from the tritium field experiments show that in the case of the 30-min release the variations of the wind directions represent the dominant mechanism of lateral plume dispersion under unstable weather conditions. This corresponds with the observation that for the short 2-min emission the plume remains more concentrated during propagation, and the small lateral dispersion parameters typical for stable conditions have to be applied. The investigations on the dispersion of aerosol plumes into a forest boundary layer show that the Gaussian plume model can be modified by a windspeed factor to be valid for predictions on aerosol concentrations and depositions even in a structured topography like a forest

  18. A model for short and medium range dispersion of radionuclides released to the atmosphere

    International Nuclear Information System (INIS)

    Clarke, R.H.

    1979-09-01

    A Working Group was established to give practical guidance on the estimation of the dispersion of radioactive releases to the atmosphere. The dispersion is estimated in the short and medium range, that is from about 100 m to a few tens of kilometres from the source, and is based upon a Gaussian plume model. A scheme is presented for categorising atmospheric conditions and values of the associated dispersion parameters are given. Typical results are presented for releases in specific meteorological conditions and a scheme is included to allow for durations of release of up to 24 hours. Consideration has also been given to predicting longer term average concentrations, typically annual averages, and results are presented which facilitate site specific calculations. The results of the models are extended to 100 km from the source, but the increasing uncertainty with which results may be predicted beyond a few tens of kilometres from the source is emphasised. Three technical appendices provide some of the rationale behind the decisions made in adopting the various models in the proposed dispersion scheme. (author)
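
    For orientation, the sketch below evaluates the standard ground-level Gaussian plume expression with ground reflection, of the kind such dispersion schemes are built on. The power-law dispersion parameters are generic, Briggs-type stand-ins for a single stability category, not the values tabulated in the report.

```python
import numpy as np

def ground_level_concentration(Q, u, x, y, H):
    """Ground-level concentration (g/m^3) from a continuous point source of
    strength Q (g/s) at effective height H (m), wind speed u (m/s), receptor at
    downwind distance x and crosswind offset y (m).  The sigma fits below are
    illustrative stand-ins for one stability class, not the report's tables."""
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)   # rough neutral-class fit
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * np.exp(-0.5 * (H / sigma_z) ** 2))

# Example: 1 g/s release from a 50 m effective stack height, 5 m/s wind,
# receptor on the plume centreline 1 km downwind.
print(ground_level_concentration(Q=1.0, u=5.0, x=1000.0, y=0.0, H=50.0))
```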

  19. Hydrodynamic dispersion

    International Nuclear Information System (INIS)

    Pryce, M.H.L.

    1985-01-01

    A dominant mechanism contributing to hydrodynamic dispersion in fluid flow through rocks is variation of travel speeds within the channels carrying the fluid, whether these be interstices between grains, in granular rocks, or cracks in fractured crystalline rocks. The complex interconnections of the channels ensure a mixing of those parts of the fluid which travel more slowly and those which travel faster. On a macroscopic scale this can be treated statistically in terms of the distribution of times taken by a particle of fluid to move from one surface of constant hydraulic potential to another, lower, potential. The distributions in the individual channels are such that very long travel times make a very important contribution. Indeed, while the mean travel time is related to distance by a well-defined transport speed, the mean square is effectively infinite. This results in an asymmetrical plume which differs markedly from a Gaussian shape. The distribution of microscopic travel times is related to the distribution of apertures in the interstices, or in the microcracks, which in turn are affected in a complex way by the stresses acting on the rock matrix

  20. New Information Dispersal Techniques for Trustworthy Computing

    Science.gov (United States)

    Parakh, Abhishek

    2011-01-01

    Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…

  1. Resonant non-Gaussianity with equilateral properties

    International Nuclear Information System (INIS)

    Gwyn, Rhiannon; Rummel, Markus

    2012-11-01

    We discuss the effect of superimposing multiple sources of resonant non-Gaussianity, which arise for instance in models of axion inflation. The resulting sum of oscillating shape contributions can be used to "Fourier synthesize" different non-oscillating shapes in the bispectrum. As an example we reproduce an approximately equilateral shape from the superposition of O(10) oscillatory contributions with resonant shape. This implies a possible degeneracy between the equilateral-type non-Gaussianity typical of models with non-canonical kinetic terms, such as DBI inflation, and an equilateral-type shape arising from a superposition of resonant-type contributions in theories with canonical kinetic terms. The absence of oscillations in the 2-point function together with the structure of the resonant N-point functions, imply that detection of equilateral non-Gaussianity at a level greater than the PLANCK sensitivity of f_NL ∼ O(5) will rule out a resonant origin. We comment on the questions arising from possible embeddings of this idea in a string theory setting.

  2. Non-Gaussian conductivity fluctuations in semiconductors

    International Nuclear Information System (INIS)

    Melkonyan, S.V.

    2010-01-01

    A theoretical study is presented on the statistical properties of conductivity fluctuations caused by concentration and mobility fluctuations of the current carriers. It is established that mobility fluctuations result from random deviations in the thermal equilibrium distribution of the carriers. It is shown that mobility fluctuations have generation-recombination and shot components which do not satisfy the requirements of the central limit theorem, in contrast to the current carrier's concentration fluctuation and intraband component of the mobility fluctuation. It is shown that in general the mobility fluctuation consists of thermal (or intraband) Gaussian and non-thermal (or generation-recombination, shot, etc.) non-Gaussian components. The analyses of theoretical results and experimental data from the literature show that the statistical properties of mobility fluctuation and of 1/f-noise fully coincide. The deviation from Gaussian statistics of the mobility or 1/f fluctuations goes hand in hand with the magnitude of non-thermal noise (generation-recombination, shot, burst, pulse noises, etc.).

  3. The simulation of solute transport: An approach free of numerical dispersion

    International Nuclear Information System (INIS)

    Carrera, J.; Melloni, G.

    1987-01-01

    The applicability of most algorithms for simulation of solute transport is limited either by instability or by numerical dispersion, as seen by a review of existing methods. A new approach is proposed that is free of these two problems. The method is based on the mixed Eulerian-Lagrangian formulation of the mass-transport problem, thus ensuring stability. Advection is simulated by a variation of reverse-particle tracking that avoids the accumulation of interpolation errors, thus preventing numerical dispersion. The algorithm has been implemented in a one-dimensional code. Excellent results are obtained, in comparison with an analytical solution. 36 refs., 14 figs., 1 tab

  4. Legendre Duality of Spherical and Gaussian Spin Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Genovese, Giuseppe, E-mail: giuseppe.genovese@math.uzh.ch [Universität Zürich, Institut für Mathematik (Switzerland); Tantari, Daniele, E-mail: daniele.tantari@sns.it [Scuola Normale Superiore di Pisa, Centro Ennio de Giorgi (Italy)

    2015-12-15

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model.

  5. Legendre Duality of Spherical and Gaussian Spin Glasses

    International Nuclear Information System (INIS)

    Genovese, Giuseppe; Tantari, Daniele

    2015-01-01

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model

  6. Methods to characterize non-Gaussian noise in TAMA

    International Nuclear Information System (INIS)

    Ando, Masaki; Arai, K; Takahashi, R; Tatsumi, D; Beyersdorf, P; Kawamura, S; Miyoki, S; Mio, N; Moriwaki, S; Numata, K; Kanda, N; Aso, Y; Fujimoto, M-K; Tsubono, K; Kuroda, K

    2003-01-01

    We present a data characterization method for the main output signal of the interferometric gravitational-wave detector, in particular targeting effective detection of burst gravitational waves from stellar core collapse. The time scale of non-Gaussian events is evaluated in this method, and events with a longer time scale than real signals are rejected as non-Gaussian noises. As a result of data analysis using 1000 h of real data with the interferometric gravitational-wave detector TAMA300, the false-alarm rate was improved 10^3 times with this non-Gaussian noise evaluation and rejection method

  7. Comparison of non-Gaussian and Gaussian diffusion models of diffusion weighted imaging of rectal cancer at 3.0 T MRI.

    Science.gov (United States)

    Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong

    2016-12-09

    Water molecular diffusion in in vivo tissue is much more complicated than simple Gaussian diffusion. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI) including intra-voxel incoherent motion (IVIM), stretched-exponential model (SEM) and Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating the water diffusion properties and characterization of rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values at a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) on primary tumor and adjacent normal rectal tissue. Parameters of standard apparent diffusion coefficient (ADC), slow- and fast-ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fitting results of actual DWI signal in rectal cancer and the normal rectal wall (R^2 = 0.998 and 0.999, respectively). The DDC achieved relatively high area under the curve (AUC = 0.980) in differentiating tumor from normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC-derived Gaussian diffusion model. SEM may be used as a potential optimal model for characterization of rectal cancer.
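
    The stretched-exponential model referred to above is commonly written as S(b) = S0 · exp(−(b·DDC)^α). A minimal fitting sketch is shown below; the b-values and noise level are hypothetical, and the fit uses a generic nonlinear least-squares routine rather than the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(b, s0, ddc, alpha):
    """Stretched-exponential DWI signal model: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return s0 * np.exp(-(b * ddc) ** alpha)

# Hypothetical 16 b-values (s/mm^2) and a noisy synthetic signal.
b = np.array([0, 10, 20, 50, 100, 150, 200, 400, 600, 800,
              1000, 1200, 1500, 2000, 2500, 3000], dtype=float)
signal = stretched_exponential(b, 1.0, 1.0e-3, 0.75) + 0.01 * np.random.randn(b.size)

params, _ = curve_fit(stretched_exponential, b, signal,
                      p0=[1.0, 1.0e-3, 0.8],
                      bounds=([0.0, 1e-6, 0.1], [2.0, 1e-1, 1.0]))
s0_fit, ddc_fit, alpha_fit = params
```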

  8. A model for the calculation of dispersion, advection and deposition of pollutants in the atmosphere

    International Nuclear Information System (INIS)

    Doron, E.

    1981-08-01

    A numerical model for the prediction of atmospheric pollutants concentrations as a function of time and location is described. The model includes effects of dispersion, advection and deposition of the pollutant. Topographic influences are included through the introduction of a terrain following vertical coordinate. The wind field, needed for the calculation of the advection, is obtained from a time series of objective analysis of actual wind measurements. A unique feature of the model is the use of the logarithm of the concentration as the predicted variable. For a concentration distribution close to Gaussian, the distribution of this variable is close to parabolic. Thus, a polynomial of low order can be fitted to the distribution and then used for the calculation of derivatives of the advection and diffusion terms with great accuracy. The fitting method used was the cubic splines method. Initial experiments with the method included tests of the interpolation methods, which were found to be very accurate, and a few dispersion and advection experiments designed for an initial check of the influence of vertical wind shear, topography and changes of wind speed and direction with time. The results of these experiments show that the model has a marked advantage over the Gaussian model but its use requires more advanced computing facilities. (author)

  9. Scalable Gaussian Processes and the search for exoplanets

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Gaussian Processes are a class of non-parametric models that are often used to model stochastic behavior in time series or spatial data. A major limitation for the application of these models to large datasets is the computational cost. The cost of a single evaluation of the model likelihood scales as the third power of the number of data points. In the search for transiting exoplanets, the datasets of interest have tens of thousands to millions of measurements with uneven sampling, rendering naive application of a Gaussian Process model impractical. To attack this problem, we have developed robust approximate methods for Gaussian Process regression that can be applied at this scale. I will describe the general problem of Gaussian Process regression and offer several applicable use cases. Finally, I will present our work on scaling this model to the exciting field of exoplanet discovery and introduce a well-tested open source implementation of these new methods.
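
    The cubic scaling mentioned above comes from factorizing the n × n kernel matrix. A naive sketch of the GP log marginal likelihood, with the Cholesky factorization as the O(n^3) bottleneck, is given below; the kernel and hyperparameters are illustrative.

```python
import numpy as np

def gp_log_likelihood(t, y, length_scale=5.0, variance=1.0, noise=1e-2):
    """Naive GP log marginal likelihood; the Cholesky factorization of the
    n x n kernel matrix is the O(n^3) step that motivates scalable methods."""
    d = t[:, None] - t[None, :]
    K = variance * np.exp(-0.5 * (d / length_scale) ** 2) + noise * np.eye(len(t))
    L = np.linalg.cholesky(K)                                 # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(t) * np.log(2.0 * np.pi))

# Even a few thousand points make this evaluation noticeably expensive.
ll = gp_log_likelihood(np.arange(500.0), np.random.randn(500))
```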

  10. Realistic continuous-variable quantum teleportation with non-Gaussian resources

    International Nuclear Information System (INIS)

    Dell'Anno, F.; De Siena, S.; Illuminati, F.

    2010-01-01

    We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.

  11. Generation of Quasi-Gaussian Pulses Based on Correlation Techniques

    Directory of Open Access Journals (Sweden)

    POHOATA, S.

    2012-02-01

    Full Text Available The Gaussian pulses have been mostly used within communications, where some applications can be emphasized: mobile telephony (GSM), where GMSK signals are used, as well as UWB communications, where short-period pulses based on the Gaussian waveform are generated. Since the Gaussian function is a theoretical concept that cannot be realized exactly from the physical point of view, it has to be approximated by various functions that admit physical implementations. New techniques for generating Gaussian pulse responses of good precision are approached, proposed and researched in this paper. The second- and third-order derivatives of the Gaussian pulse response are accurately generated. The third-order derivative is composed of four individual rectangular pulses of fixed amplitudes, which are easily generated by standard techniques. In order to generate pulses able to satisfy the spectral mask requirements, an adequate filter must be applied. This paper emphasizes a comparative analysis based on the relative error and the energy spectra of the proposed pulses.
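
    The second- and third-order derivatives of the Gaussian pulse that the paper approximates can be written in closed form, as in the sketch below; the pulse width and time window are illustrative values for a UWB-like setting, not the paper's.

```python
import numpy as np

def gaussian_derivatives(t, sigma=1.0):
    """Analytic second and third derivatives of a unit-amplitude Gaussian
    g(t) = exp(-t^2 / (2 sigma^2)), the waveforms approximated by the pulse
    generator described in the paper."""
    g = np.exp(-t**2 / (2.0 * sigma**2))
    d2 = (t**2 / sigma**4 - 1.0 / sigma**2) * g
    d3 = (3.0 * t / sigma**4 - t**3 / sigma**6) * g
    return d2, d3

t = np.linspace(-5e-9, 5e-9, 1001)          # 10 ns window, e.g. for UWB pulses
d2, d3 = gaussian_derivatives(t, sigma=0.5e-9)
```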

  12. Quantifying predictability through information theory: small sample estimation in a non-Gaussian framework

    International Nuclear Information System (INIS)

    Haven, Kyle; Majda, Andrew; Abramov, Rafail

    2005-01-01

    Many situations in complex systems require quantitative estimates of the lack of information in one probability distribution relative to another. In short term climate and weather prediction, examples of these issues might involve the lack of information in the historical climate record compared with an ensemble prediction, or the lack of information in a particular Gaussian ensemble prediction strategy involving the first and second moments compared with the non-Gaussian ensemble itself. The relative entropy is a natural way to quantify the predictive utility in this information, and recently a systematic computationally feasible hierarchical framework has been developed. In practical systems with many degrees of freedom, computational overhead limits ensemble predictions to relatively small sample sizes. Here the notion of predictive utility, in a relative entropy framework, is extended to small random samples by the definition of a sample utility, a measure of the unlikeliness that a random sample was produced by a given prediction strategy. The sample utility is the minimum predictability, with a statistical level of confidence, which is implied by the data. Two practical algorithms for measuring such a sample utility are developed here. The first technique is based on the statistical method of null-hypothesis testing, while the second is based upon a central limit theorem for the relative entropy of moment-based probability densities. These techniques are tested on known probability densities with parameterized bimodality and skewness, and then applied to the Lorenz '96 model, a recently developed 'toy' climate model with chaotic dynamics mimicking the atmosphere. The results show a detection of non-Gaussian tendencies of prediction densities at small ensemble sizes with between 50 and 100 members, with a 95% confidence level

  13. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    Energy Technology Data Exchange (ETDEWEB)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R. [Universite de Provence (PIIM), Centre de Saint-Jerome, 13 - Marseille (France); Capes, H.; Guirlet, R. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2003-07-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the D_α line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.

  14. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    International Nuclear Information System (INIS)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R.; Capes, H.; Guirlet, R.

    2003-01-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the D_α line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior

  15. Optimization of broadband semiconductor chirped mirrors with genetic algorithm

    OpenAIRE

    Dems, M.; Wnuk, P.; Wasylczyk, P.; Zinkiewicz, L.; Wojcik-Jedlinska, A.; Reginski, K.; Hejduk, K.; Jasik, A.

    2016-01-01

    A genetic algorithm was applied to the optimization of dispersion properties in semiconductor Bragg reflectors for applications in femtosecond lasers. Broadband, large negative group-delay dispersion was achieved in the optimized design: a group-delay dispersion (GDD) as large as −3500 fs² was theoretically obtained over a 10-nm bandwidth. The designed structure was manufactured and tested, providing a GDD of −3320 fs² over a 7-nm bandwidth. The mirror performance was ...

  16. Sensitivity analysis of an operational advanced Gaussian model to different turbulent regimes

    International Nuclear Information System (INIS)

    Mangia, C.; Rizza, U.; Tirabassi, T.

    1998-01-01

    A non-reactive air pollution model evaluating ground level concentration is presented. It relies on a new Gaussian formulation (Lupini, R. and Tirabassi, T., J. Appl. Meteor., 20 (1981) 565-570; Tirabassi, T. and Rizza, U., Atmos. Environ., 28 (1994) 611-615) for transport and vertical diffusion in the Atmospheric Boundary Layer (ABL). In this formulation, the source height is replaced by a virtual height expressed by simple functions of meteorological variables. The model accepts a general profile of wind u(z) and eddy diffusivity coefficient K_z. The lateral dispersion coefficient is based on Taylor's theory (Taylor, G. I., Proc. London Math. Soc., 20 (1921) 196-204). The turbulence in the ABL is subdivided into various regimes, each characterized by different parameters for length and velocity scales. The model performances under unstable conditions have been tested utilizing two different data sets

  17. Coherence of the vortex Bessel-Gaussian beam in turbulent atmosphere

    Science.gov (United States)

    Lukin, Igor P.

    2017-11-01

    In this paper, theoretical research on the coherent properties of vortex Bessel-Gaussian optical beams propagating in a turbulent atmosphere is developed. The approach to the analysis of this problem is based on the analytical solution of the equation for the transverse second-order mutual coherence function of a field of optical radiation. The behavior of the integral scale of the coherence degree of vortex Bessel-Gaussian optical beams, depending on the parameters of the optical beam and the characteristics of the turbulent atmosphere, is particularly considered. It is shown that the integral scale of the coherence degree of a vortex Bessel-Gaussian optical beam essentially depends on the value of the topological charge of the vortex optical beam. As the topological charge of the vortex Bessel-Gaussian optical beam increases, the integral scale of its coherence degree decreases.

  18. Fitting the Fractional Polynomial Model to Non-Gaussian Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Ji Hoon Ryoo

    2017-08-01

    Full Text Available As in cross-sectional studies, longitudinal studies involve non-Gaussian data such as binomial, Poisson, gamma, and inverse-Gaussian distributions, and multivariate exponential families. A number of statistical tools have thus been developed to deal with non-Gaussian longitudinal data, including analytic techniques to estimate parameters in both fixed and random effects models. However, as yet growth modeling with non-Gaussian data is somewhat limited when considering the transformed expectation of the response via a linear predictor as a functional form of explanatory variables. In this study, we introduce a fractional polynomial model (FPM) that can be applied to model non-linear growth with non-Gaussian longitudinal data and demonstrate its use by fitting two empirical binary and count data models. The results clearly show the efficiency and flexibility of the FPM for such applications.

  19. GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi

    2017-05-24

    As crucial markers in identifying biological elements and processes in mammalian genomes, CpG islands (CGI) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria of CGI rely on: (a) %G+C content is ≥ 50%, (b) the ratio of the observed CpG content and the expected CpG content is ≥ 0.6, and (c) the general length of CGI is greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are based on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria. Experiments indicate that in many cases %G+C is human genome. We analyze the energy distribution over genomic primary structure for each CpG site and adopt the parameters from statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG can achieve better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by the linear statistical method but by the Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but deliberately chosen by optimizing the biological statistics. By using the pseudopotential analysis on CpG islands, the novel model is validated on both the real and artificial data sets.

  20. Transient Properties of a Bistable System with Delay Time Driven by Non-Gaussian and Gaussian Noises: Mean First-Passage Time

    International Nuclear Information System (INIS)

    Li Dongxi; Xu Wei; Guo Yongfeng; Li Gaojie

    2008-01-01

    The mean first-passage time of a bistable system with time-delayed feedback driven by multiplicative non-Gaussian noise and additive Gaussian white noise is investigated. Firstly, the non-Markov process is reduced to the Markov process through a path-integral approach; Secondly, the approximate Fokker-Planck equation is obtained by applying the unified coloured noise approximation, the small time delay approximation and the Novikov Theorem. The functional analysis and simplification are employed to obtain the approximate expressions of MFPT. The effects of non-Gaussian parameter (measures deviation from Gaussian character) r, the delay time τ, the noise correlation time τ_0, the intensities D and α of noise on the MFPT are discussed. It is found that the escape time could be reduced by increasing the delay time τ, the noise correlation time τ_0, or by reducing the intensities D and α. As far as we know, this is the first time to consider the effect of delay time on the mean first-passage time in the stochastic dynamical system
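
    As a purely illustrative companion to the analysis above, the sketch below estimates a mean first-passage time for the double-well potential V(x) = -x^2/2 + x^4/4 by Euler-Maruyama simulation. It uses white Gaussian noise only and omits the delay and the non-Gaussian noise treated in the paper, so it shows the quantity being computed rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_first_passage_time(D=0.1, dt=5e-3, n_paths=200, t_max=100.0):
    """Crude Monte Carlo estimate of the mean first-passage time from the left
    well (x = -1) to the right well (x = +1) of V(x) = -x^2/2 + x^4/4, driven
    by additive white Gaussian noise of intensity D."""
    times = []
    for _ in range(n_paths):
        x, t = -1.0, 0.0
        while x < 1.0 and t < t_max:
            drift = x - x**3                      # -V'(x)
            x += drift * dt + np.sqrt(2.0 * D * dt) * rng.normal()
            t += dt
        times.append(t)
    return np.mean(times)

print(mean_first_passage_time())
```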

  1. An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension

    Directory of Open Access Journals (Sweden)

    Yosra Marnissi

    2018-02-01

    Full Text Available In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or driven from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems—namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of signal corrupted by a mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous

  2. mathematical modelling of atmospheric dispersion of pollutants

    International Nuclear Information System (INIS)

    Mohamed, M.E.

    2002-01-01

    The main objective of this thesis is to deal with environmental problems using mathematical techniques. In this respect, atmospheric dispersion processes have been investigated by improving the analytical models to better represent the realistic physical phenomena. To achieve these aims, the skeleton of this work contains both mathematical and environmental topics, organized in six chapters. In chapter one we present a comprehensive review of the most important information related to our work, such as thermal stability, plume rise, inversion, advection, dispersion of pollutants, and Gaussian plume models dealing with both radioactive and industrial contaminants. Chapter two deals with estimating the decay distance as well as the decay time of either industrial or radioactive airborne pollutants. Further, a highly turbulent atmosphere has been investigated as a special case in the three main thermal stability classes, namely neutral, stable, and unstable atmosphere. Chapter three is concerned with obtaining the maximum ground level concentration of an air pollutant. The variable effective height of pollutants has been considered throughout the mathematical treatment. As a special case the constancy of the effective height has been derived mathematically, and the maximum ground level concentration as well as its location have been established

  3. Design of elliptic curve cryptoprocessors over GF(2^163) using the Gaussian normal basis

    Directory of Open Access Journals (Sweden)

    Paulo Cesar Realpe

    2014-05-01

    Full Text Available This paper presents the efficient hardware implementation of cryptoprocessors that carry out the scalar multiplication kP over the finite field GF(2^163) using two digit-level multipliers. The finite field arithmetic operations were implemented using Gaussian normal basis (GNB) representation, and the scalar multiplication kP was implemented using the Lopez-Dahab algorithm, the 2-NAF halve-and-add algorithm and the w-tNAF method for Koblitz curves. The processors were designed using VHDL description, synthesized on the Stratix-IV FPGA using Quartus II 12.0 and verified using SignalTAP II and Matlab. The simulation results show that the cryptoprocessors present a very good performance to carry out the scalar multiplication kP. In this case, the computation times of the multiplication kP using Lopez-Dahab, 2-NAF halve-and-add and 16-tNAF for Koblitz curves were 13.37 µs, 16.90 µs and 5.05 µs, respectively.
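
    The NAF recoding underlying the 2-NAF and w-tNAF methods mentioned above replaces the binary expansion of the scalar k with signed digits so that no two adjacent digits are nonzero, which reduces the number of point additions in kP. A small sketch of the basic (width-2) recoding is given below; it is generic and not tied to the GF(2^163) hardware design.

```python
def naf(k):
    """Non-adjacent form (2-NAF) of a positive integer k, least significant
    digit first; each digit is in {-1, 0, 1} and no two adjacent digits are
    nonzero.  This is the scalar recoding used by NAF-based kP algorithms."""
    digits = []
    while k > 0:
        if k & 1:
            z = 2 - (k % 4)          # +1 or -1, chosen so that (k - z) % 4 == 0
            k -= z
        else:
            z = 0
        digits.append(z)
        k //= 2
    return digits

# Example: 29 = 32 - 4 + 1  ->  digits [1, 0, -1, 0, 0, 1]
print(naf(29))
```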

  4. Feasibility study on the least square method for fitting non-Gaussian noise data

    Science.gov (United States)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

    This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of the two typical non-Gaussian noises, Lévy and stretched Gaussian noises, to the exact values of the selected functions including linear equations, polynomial and exponential equations, and the maximum absolute and the mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than the Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
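
    A toy version of the experiment described above is sketched below: a straight line is fitted by ordinary least squares under Gaussian noise and under a heavy-tailed noise, and the maximum absolute error of the fitted line is compared. Standard Cauchy samples are used as a convenient stand-in for Lévy-type noise, since NumPy does not ship a general stable-law sampler; the functions and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
exact = 2.0 * x + 1.0

def fit_and_error(noise):
    """Least-squares line fit to noisy data; returns the maximum absolute
    error of the fitted line against the exact values."""
    y = exact + noise
    slope, intercept = np.polyfit(x, y, 1)
    return np.max(np.abs(slope * x + intercept - exact))

gauss_err = fit_and_error(0.05 * rng.normal(size=x.size))
heavy_err = fit_and_error(0.05 * rng.standard_cauchy(size=x.size))
print(gauss_err, heavy_err)     # the heavy-tailed fit is typically much worse
```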

  5. The study of atmospheric dispersion of radionuclide near nuclear power plant using CFD approach

    International Nuclear Information System (INIS)

    Nagrale, Dhanesh B.; Bera, Subrata; Deo, Anuj K.; Gaikwad, Avinash J.

    2015-01-01

    Most of the studies on atmospheric dispersion of radioactive material released from nuclear power plants are based on Gaussian plume models, which fail to take into account the turbulence generated. The Fire Dynamics Simulator (FDS) code is one such flow model that uses a form of the Navier-Stokes equations for low-Mach-number applications. In the 0-2 km range near a nuclear power plant, mainly near the source of emission of radionuclides, obstructions such as natural draft cooling towers, plant buildings and structures are located. Stability class 'F' conditions and a surrounding atmospheric temperature of 15°C are considered in the analysis. The main constituents of the radionuclides released from the stack are xenon and krypton. Two cases are analysed: (a) dispersion of gases without the obstruction of the cooling tower, and (b) dispersion of gases with the obstruction of the cooling tower. It is observed that the mass fraction of radionuclides near the cooling tower ground increased to a certain extent due to the obstruction and wake effect. (author)

  6. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    Science.gov (United States)

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361

  7. Coincidence Imaging and interference with coherent Gaussian beams

    Institute of Scientific and Technical Information of China (English)

    CAI Yang-jian; ZHU Shi-yao

    2006-01-01

    We present a theoretical study of coincidence imaging and interference with coherent Gaussian beams. The equations for the coincidence image formation and interference fringes are derived, from which it is clear that the imaging is due to the corresponding focusing in the two paths. The quality and visibility of the images and fringes can be high simultaneously. The nature of coincidence imaging and interference with quantum entangled photon pairs and with coherent Gaussian beams is different. The coincidence image with coherent Gaussian beams is due to intensity-intensity correspondence, a classical nature, while that with entangled photon pairs is due to the amplitude correlation, a quantum nature.

  8. Marine Radioactivity Studies in the Suez Canal, Part II: Field Experiments and a Modelling Study of Dispersion

    Science.gov (United States)

    Abril, J. M.; Abdel-Aal, M. M.; Al-Gamal, S. A.; Abdel-Hay, F. A.; Zahar, H. M.

    2000-04-01

    In this paper we take advantage of the two field tracing experiments carried out under the IAEA project EGY/07/002, to develop a modelling study on the dispersion of radioactive pollution in the Suez Canal. The experiments were accomplished by using rhodamine B as a tracer, and water samples were measured by luminescence spectrometry. The presence of natural luminescent particles in the canal waters limited the use of some field data. During experiments, water levels, velocities, wind and other physical parameters were recorded to supply appropriate information for the modelling work. From this data set, the hydrodynamics of the studied area has been reasonably described. We apply 1-D Gaussian and 2-D modelling approaches to predict the position and the spatial shape of the plume. The use of different formulations for dispersion coefficients is studied. These dispersion coefficients are then applied in a 2-D hydrodynamic and dispersion model for the Bitter Lake to investigate different scenarios of accidental discharges.

  9. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis manually or by means of parallel compilers. Some vector/matrix computations such as the matrix-vector products with simple data dependence structures (data parallelism) can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based on solely exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms intended specifically for parallel computation have to be designed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism will be discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called the ACER (Alternating Cyclic Elimination and Reduction) algorithm.
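
    The sequential nature of tridiagonal Gaussian elimination referred to above is visible in the classic Thomas algorithm sketched below, where each forward-sweep step depends on the previous row; this dependency is what parallel schemes such as cyclic reduction (and the ACER algorithm of the paper) are designed to break. The sketch is generic and not the paper's formalism.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused).
    The forward sweep carries a dependency from row i-1 to row i, which is
    why this elimination is inherently sequential."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 example of -x_{i-1} + 2 x_i - x_{i+1} = 1; expected solution [2, 3, 3, 2].
print(thomas_solve(np.array([0.0, -1, -1, -1]), np.full(4, 2.0),
                   np.array([-1.0, -1, -1, 0]), np.ones(4)))
```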

  10. Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case

    Directory of Open Access Journals (Sweden)

    Królak Andrzej

    2005-03-01

    Full Text Available The article reviews the statistical theory of signal detection in application to analysis of deterministic gravitational-wave signals in the noise of a detector. Statistical foundations for the theory of signal detection and parameter estimation are presented. Several tools needed for both theoretical evaluation of the optimal data analysis methods and for their practical implementation are introduced. They include optimal signal-to-noise ratio, Fisher matrix, false alarm and detection probabilities, F-statistic, template placement, and fitting factor. These tools apply to the case of signals buried in a stationary and Gaussian noise. Algorithms to efficiently implement the optimal data analysis techniques are discussed. Formulas are given for a general gravitational-wave signal that includes as special cases most of the deterministic signals of interest.

  11. Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case

    Directory of Open Access Journals (Sweden)

    Piotr Jaranowski

    2012-03-01

    Full Text Available The article reviews the statistical theory of signal detection in application to analysis of deterministic gravitational-wave signals in the noise of a detector. Statistical foundations for the theory of signal detection and parameter estimation are presented. Several tools needed for both theoretical evaluation of the optimal data analysis methods and for their practical implementation are introduced. They include optimal signal-to-noise ratio, Fisher matrix, false alarm and detection probabilities, ℱ-statistic, template placement, and fitting factor. These tools apply to the case of signals buried in a stationary and Gaussian noise. Algorithms to efficiently implement the optimal data analysis techniques are discussed. Formulas are given for a general gravitational-wave signal that includes as special cases most of the deterministic signals of interest.

  12. EDITORIAL: Non-linear and non-Gaussian cosmological perturbations Non-linear and non-Gaussian cosmological perturbations

    Science.gov (United States)

    Sasaki, Misao; Wands, David

    2010-06-01

    In recent years there has been a resurgence of interest in the study of non-linear perturbations of cosmological models. This has been the result of both theoretical developments and observational advances. New theoretical challenges arise at second and higher order due to mode coupling and the need to develop new gauge-invariant variables beyond first order. In particular, non-linear interactions lead to deviations from a Gaussian distribution of primordial perturbations even if initial vacuum fluctuations are exactly Gaussian. These non-Gaussianities provide an important probe of models for the origin of structure in the very early universe. We now have a detailed picture of the primordial distribution of matter from surveys of the cosmic microwave background, notably NASA's WMAP satellite. The situation will continue to improve with future data from the ESA Planck satellite launched in 2009. To fully exploit these data cosmologists need to extend non-linear cosmological perturbation theory beyond the linear theory that has previously been sufficient on cosmological scales. Another recent development has been the realization that large-scale structure, revealed in high-redshift galaxy surveys, could also be sensitive to non-linearities in the primordial curvature perturbation. This focus section brings together a collection of invited papers which explore several topical issues in this subject. We hope it will be of interest to theoretical physicists and astrophysicists alike interested in understanding and interpreting recent developments in cosmological perturbation theory and models of the early universe. Of course it is only an incomplete snapshot of a rapidly developing field and we hope the reader will be inspired to read further work on the subject and, perhaps, fill in some of the missing pieces. This focus section is dedicated to the memory of Lev Kofman (1957-2009), an enthusiastic pioneer of inflationary cosmology and non-Gaussian perturbations.

  13. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar and isotropic radioactive source, obtained by subjecting the standard Gaussian probability density function (PDF) to a translatory method that displaces its maximum across its domain, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random-number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because it can be adapted to simulations involving natural terrestrial radiation or accidents in medical establishments or industries where the radioactive material spreads in a plane. Some attempts to obtain an FRN for the PDF of the problem have already been implemented by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the MC rejection sampling technique. This article followed the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, we used two MCES: the MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the MC EGSnrc code and the GND planar source based on the rejection technique) and the MSTA_NT. The two MCES are similar in all respects except the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by the GND
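
    The MC rejection sampling technique mentioned above can be illustrated with a small generic sketch: a right-asymmetric density built from the standard Gaussian (a skew-normal shape, standing in for the translated PDF of the paper) is sampled against a uniform envelope. The target density, interval and bound are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def target_pdf(x, shape=1.0):
    """Illustrative right-asymmetric density built from the standard Gaussian
    (a skew-normal shape); a stand-in for the translated PDF of the paper."""
    return 2.0 * norm.pdf(x) * norm.cdf(shape * x)

def rejection_sample(n, xmin=-4.0, xmax=6.0, pmax=0.7):
    """Plain MC rejection sampling against a uniform envelope on [xmin, xmax];
    pmax must be an upper bound on the target density over that interval."""
    out = []
    while len(out) < n:
        x = rng.uniform(xmin, xmax)
        if rng.uniform(0.0, pmax) < target_pdf(x):
            out.append(x)
    return np.array(out)

samples = rejection_sample(10000)
```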

  14. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

    Directory of Open Access Journals (Sweden)

    Shou Yu-Wen

    2010-01-01

    Full Text Available We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a brand-new and complete structure in feature combination as well as analysis for orientating and labeling moving shadows so as to extract the defined objects in foregrounds more easily in each snapshot of the original files of videos which are acquired in the real traffic situations. Moreover, we make use of the Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our tested images, and define two indices for characterizing non-shadowed regions where one indicates the characteristics of lines and the other index can be characterized by the information in gray scales of images which helps us to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we carry it out with a practical application of traffic flow detection in ITS (Intelligent Transportation System), namely vehicle counting. Our algorithm shows a fast processing speed of 13.84 ms/frame and improves the accuracy rate by 4%~10% for our three tested videos in the experimental results of vehicle counting.

  15. Integral momenta of vortex Bessel-Gaussian beams in turbulent atmosphere.

    Science.gov (United States)

    Lukin, Igor P

    2016-04-20

    The orbital angular momentum of vortex Bessel-Gaussian beams propagating in turbulent atmosphere is studied theoretically. The field of an optical beam is determined through the solution of the paraxial wave equation for a randomly inhomogeneous medium with fluctuations of the refraction index of the turbulent atmosphere. Peculiarities in the behavior of the total power of the vortex Bessel-Gaussian beam at the receiver (or transmitter) are examined. The dependence of the total power of the vortex Bessel-Gaussian beam on optical beam parameters, namely, the transverse wave number of optical radiation, amplitude factor radius, and, especially, topological charge of the optical beam, is analyzed in detail. It turns out that the mean value of the orbital angular momentum of the vortex Bessel-Gaussian beam remains constant during propagation in the turbulent atmosphere. It is shown that the variance of fluctuations of the orbital angular momentum of the vortex Bessel-Gaussian beam propagating in turbulent atmosphere calculated with the "mean-intensity" approximation is equal to zero identically. Thus, it is possible to declare confidently that the variance of fluctuations of the orbital angular momentum of the vortex Bessel-Gaussian beam in turbulent atmosphere is not very large.

  16. Selection of individual features of a speech signal using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Kamiński

    2016-03-01

    Full Text Available The paper presents an automatic speaker recognition system, implemented in the Matlab environment, and demonstrates how to achieve and optimize various elements of the system. The main emphasis was put on feature selection for a speech signal using a genetic algorithm which takes into account the synergy of features. The results of the optimization of selected elements of the classifier are also shown, including the number of Gaussian distributions used to model each of the voices. In addition, for creating the voice models, a universal voice model has been used. Keywords: biometrics, automatic speaker recognition, genetic algorithms, feature selection
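
    A minimal sketch of genetic-algorithm feature selection of this general kind is given below: binary masks over candidate features are evolved with tournament selection, uniform crossover and bit-flip mutation. The fitness function here is a simple nearest-centroid accuracy on synthetic data; a real system such as the one described would score masks with its GMM-based speaker models instead.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, X, y):
    """Placeholder fitness: nearest-centroid accuracy on the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float(np.mean(pred == y))

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Tiny genetic algorithm over binary feature masks: tournament selection,
    uniform crossover and bit-flip mutation."""
    n_feat = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n_feat)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        new = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, size=2)
            a = population[i] if scores[i] >= scores[j] else population[j]
            k, l = rng.integers(0, pop, size=2)
            b = population[k] if scores[k] >= scores[l] else population[l]
            child = np.where(rng.random(n_feat) < 0.5, a, b)   # uniform crossover
            child ^= rng.random(n_feat) < p_mut                # bit-flip mutation
            new.append(child)
        population = np.array(new)
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)]

# Toy data: 2 classes, 10 candidate features, only the first 3 informative.
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)
X[:, :3] += y[:, None] * 1.5
best_mask = ga_select(X, y)
```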

  17. Assessment of impact distances for particulate matter dispersion: A stochastic approach

    Energy Technology Data Exchange (ETDEWEB)

    Godoy, S.M.; Mores, P.L.; Santa Cruz, A.S.M. [CAIMI - Centro de Aplicaciones Informaticas y Modelado en Ingenieria, Universidad Tecnologica Nacional-Facultad Regional Rosario, Zeballos 1341-S2000 BQA Rosario, Santa Fe (Argentina); Scenna, N.J. [CAIMI - Centro de Aplicaciones Informaticas y Modelado en Ingenieria, Universidad Tecnologica Nacional-Facultad Regional Rosario, Zeballos 1341-S2000 BQA Rosario, Santa Fe (Argentina); INGAR - Instituto de Desarrollo y Diseno (Fundacion ARCIEN - CONICET), Avellaneda 3657, S3002 GJC Santa Fe (Argentina)], E-mail: nscenna@santafe-conicet.gov.ar

    2009-10-15

    It is known that pollutants can be dispersed from emission sources by the wind or settled on the ground. Particle size, stack height, topography and meteorological conditions strongly affect particulate matter (PM) dispersion. In this work, an impact distance calculation methodology considering different particle sizes is presented. A Gaussian-type dispersion model for PM that handles particle sizes larger than 0.1 μm is used. The model considers primary particles and continuous emissions. The PM concentration distribution at every affected geographical point defined by a grid is computed. Stochastic uncertainty caused by the natural variability of atmospheric parameters is taken into account in the dispersion model by applying a Monte Carlo methodology. The prototype package STRRAP, which accounts for the stochastic behaviour of atmospheric variables and was developed for risk assessment and safe distance calculation [Godoy SM, Santa Cruz ASM, Scenna NJ. STRRAP SYSTEM - A software for hazardous materials risk assessment and safe distances calculation. Reliability Engineering and System Safety 2007;92(7):847-57], is extended to the analysis of PM dispersion in air. STRRAP computes distances from the source to every affected receptor in each trial and generates the impact distance distribution for each particle size. In addition, a representative impact distance value to delimit the affected area can be obtained. The dispersion of fuel oil stack effluents in Rosario city is simulated as a case study. Mass concentration distributions and impact distances are computed for the range of interest in environmental air quality evaluations (PM2.5-PM10).

  18. Assessment of impact distances for particulate matter dispersion: A stochastic approach

    International Nuclear Information System (INIS)

    Godoy, S.M.; Mores, P.L.; Santa Cruz, A.S.M.; Scenna, N.J.

    2009-01-01

    It is known that pollutants can be dispersed from emission sources by the wind or settled on the ground. Particle size, stack height, topography and meteorological conditions strongly affect particulate matter (PM) dispersion. In this work, an impact distance calculation methodology considering different particle sizes is presented. A Gaussian-type dispersion model for PM that handles particle sizes larger than 0.1 μm is used. The model considers primary particles and continuous emissions. The PM concentration distribution at every affected geographical point defined by a grid is computed. Stochastic uncertainty caused by the natural variability of atmospheric parameters is taken into account in the dispersion model by applying a Monte Carlo methodology. The prototype package STRRAP, which accounts for the stochastic behaviour of atmospheric variables and was developed for risk assessment and safe distance calculation [Godoy SM, Santa Cruz ASM, Scenna NJ. STRRAP SYSTEM - A software for hazardous materials risk assessment and safe distances calculation. Reliability Engineering and System Safety 2007;92(7):847-57], is extended to the analysis of PM dispersion in air. STRRAP computes distances from the source to every affected receptor in each trial and generates the impact distance distribution for each particle size. In addition, a representative impact distance value to delimit the affected area can be obtained. The dispersion of fuel oil stack effluents in Rosario city is simulated as a case study. Mass concentration distributions and impact distances are computed for the range of interest in environmental air quality evaluations (PM2.5-PM10).
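
    To make the stochastic impact-distance idea concrete, the sketch below implements a much-simplified version: a ground-level Gaussian plume for a single particle-size class, Monte Carlo sampling of wind speed and dispersion coefficients, and the farthest distance at which an assumed air-quality threshold is exceeded recorded per trial. It is not the STRRAP package; gravitational settling is neglected, and every number, including the emission rate, stack height, threshold and parameter distributions, is an assumption.

```python
# Simplified sketch of the stochastic impact-distance idea (not the STRRAP
# package): a ground-level Gaussian plume concentration for one particle-size
# class, with Monte Carlo sampling of wind speed and dispersion coefficients;
# the farthest downwind distance at which an assumed air-quality threshold is
# exceeded is recorded for every trial.  All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)
Q, H = 10.0, 30.0                    # emission rate [g/s] and stack height [m]
threshold = 50e-6                    # assumed limit concentration [g/m^3]
x = np.linspace(50.0, 20000.0, 2000) # downwind distances [m]

def plume_centerline(u, a, b):
    """Ground-level, centerline concentration of a Gaussian plume (with ground reflection)."""
    sigma_y = a * x ** 0.894                 # simple power-law sigma parametrization
    sigma_z = b * x ** 0.894
    return (Q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-H**2 / (2 * sigma_z**2))

impact = []
for _ in range(5000):                        # Monte Carlo over atmospheric variability
    u = rng.lognormal(mean=np.log(3.0), sigma=0.4)      # wind speed [m/s]
    a = rng.uniform(0.08, 0.22)                         # sigma_y coefficient
    b = rng.uniform(0.06, 0.12)                         # sigma_z coefficient
    c = plume_centerline(u, a, b)
    above = np.nonzero(c >= threshold)[0]
    impact.append(x[above[-1]] if above.size else 0.0)  # farthest exceedance

impact = np.array(impact)
print("median impact distance [m]:", np.median(impact))
print("95th percentile       [m]:", np.percentile(impact, 95))
```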

  19. Ultrawide Bandwidth Receiver Based on a Multivariate Generalized Gaussian Distribution

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-04-01

    A multivariate generalized Gaussian density (MGGD) is used to approximate the multiple-access interference (MAI) and additive white Gaussian noise in a pulse-based ultrawide bandwidth (UWB) system. The MGGD probability density function (pdf) is shown to be a better approximation for the UWB system than the multivariate Gaussian, multivariate Laplacian and multivariate Gaussian-Laplacian mixture (GLM) densities. The similarity between the simulated and the approximated pdf is measured with the help of a modified Kullback-Leibler distance (KLD), and the MGGD is shown to have the smallest KLD among the Gaussian, Laplacian and GLM densities. A receiver based on the principle of minimum bit error rate is designed for the MGGD pdf. As this requirement is stringent, an adaptive implementation of the receiver is also carried out in this paper; the training sequence of the desired user is the only information needed to implement the detector adaptively. © 2002-2012 IEEE.
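
    The density-comparison step can be illustrated in one dimension with SciPy's gennorm (generalized normal) distribution, as sketched below. This is a deliberately simplified stand-in: the paper works with the multivariate density and a modified KLD, whereas here heavy-tailed synthetic samples are fitted by Gaussian, Laplacian and generalized Gaussian models and ranked by a discretized KLD against the empirical histogram.

```python
# 1-D illustration (an assumption, not the paper's multivariate receiver) of
# the density-comparison step: fit Gaussian, Laplacian and generalized
# Gaussian models to heavy-tailed "interference" samples and rank them by a
# discretized Kullback-Leibler distance from the empirical histogram.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
samples = stats.gennorm.rvs(beta=0.7, size=20000, random_state=rng)  # heavy-tailed

hist, edges = np.histogram(samples, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

candidates = {
    "Gaussian":             stats.norm(*stats.norm.fit(samples)),
    "Laplacian":            stats.laplace(*stats.laplace.fit(samples)),
    "generalized Gaussian": stats.gennorm(*stats.gennorm.fit(samples)),
}

for name, dist in candidates.items():
    q = dist.pdf(centers)
    mask = (hist > 0) & (q > 0)                 # avoid log(0) in empty bins
    kld = np.sum(hist[mask] * np.log(hist[mask] / q[mask]) * np.diff(edges)[mask])
    print(f"{name:22s} KLD = {kld:.4f}")        # the GG fit should give the smallest
```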

  20. Tunable output-frequency filter algorithm for imaging through scattering media under LED illumination

    Science.gov (United States)

    Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli

    2018-03-01

    We present a tunable output-frequency filter (TOF) algorithm to reconstruct an object from noisy experimental data acquired under low-power, partially coherent illumination, such as an LED, when imaging through scattering media. In the iterative algorithm, Gaussian functions with different filter windows are employed at different stages of the iteration process to reduce the corruption introduced by experimental noise and to guide the search towards a global minimum of the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstructions in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction, since the filter is applied only to the region outside the object. The feasibility of the proposed method is demonstrated by experimental results.
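
    A schematic of how stage-dependent Gaussian filter windows can be combined with iterative phase retrieval is sketched below. It uses simulated data, a known support, and an off-support smoothing step as a rough stand-in for the paper's output-frequency filter; all parameters are assumptions rather than the authors' settings.

```python
# Schematic sketch with simulated data and assumed parameters (not the
# authors' experiment): Fourier-modulus phase retrieval in which a Gaussian
# filter, with a window changed between stages of the iteration, smooths the
# estimate only outside the object support, leaving the object region at
# full resolution while noise in the background is suppressed.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
N = 128
obj = np.zeros((N, N))
obj[48:80, 40:88] = 1.0                          # simple synthetic object
support = obj > 0

meas = np.abs(np.fft.fft2(obj))
meas += 0.05 * meas.mean() * rng.standard_normal(meas.shape)   # noisy modulus
meas = np.clip(meas, 0.0, None)

est = rng.random((N, N))                         # random initial guess
stages = [(100, 4.0), (100, 2.0), (100, 1.0)]    # (iterations, filter sigma)

for n_iter, sigma in stages:                     # coarse-to-fine filter window
    for _ in range(n_iter):
        F = np.fft.fft2(est)
        F = meas * np.exp(1j * np.angle(F))      # enforce the measured modulus
        est = np.real(np.fft.ifft2(F))
        smoothed = gaussian_filter(est, sigma)   # low-pass only off-support
        est = np.where(support, np.clip(est, 0.0, None), 0.5 * smoothed)

err = np.linalg.norm(est - obj) / np.linalg.norm(obj)
print("relative reconstruction error:", round(float(err), 3))
```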