WorldWideScience

Sample records for regression gaussian process

  1. Bounded Gaussian process regression

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

    2013-01-01

    We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We...... with the proposed explicit noise-model extension....

  2. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables. Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high-dimensional …

  3. Gaussian process regression for geometry optimization

    Science.gov (United States)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
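    A minimal sketch of the idea described above (not the DL-FIND implementation): fit a GPR surrogate to energies of a toy one-dimensional potential and minimize the surrogate mean, comparing the twice-differentiable Matérn kernel against the squared exponential (RBF) kernel. The potential, training points and optimizer settings are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, RBF

    def energy(x):
        # Toy double-well potential standing in for an expensive PES evaluation.
        return (x**2 - 1.0)**2 + 0.1 * x

    X_train = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)
    y_train = energy(X_train).ravel()

    for kernel in (Matern(length_scale=1.0, nu=2.5), RBF(length_scale=1.0)):
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)
        # Minimize the cheap surrogate mean instead of the expensive true energy.
        res = minimize(lambda x: gpr.predict(np.atleast_2d(x))[0], x0=np.array([0.5]))
        print(type(kernel).__name__, "surrogate minimum near x =", res.x[0])
    ```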

  4. Gaussian process regression for tool wear prediction

    Science.gov (United States)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear prediction and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR achieves better prediction accuracy than artificial neural networks (ANN) and support vector machines (SVM) because Gaussian noise can be modeled quantitatively within the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps remove the noise and weaken its negative effects, greatly compressing and smoothing the confidence interval, which is conducive to accurate tool wear monitoring. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which improves the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
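    A hedged sketch of the nonlinear-feature-extraction-plus-GPR pipeline outlined above, with scikit-learn's KernelPCA as a rough stand-in for the authors' KPCA_IRBF and synthetic sensor features in place of real cutting data; the feature dimensions, kernel choices and 95% band are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))                             # multi-sensor cutting features (synthetic)
    wear = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)   # synthetic flank wear width

    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)  # nonlinear feature fusion
    Z = kpca.fit_transform(X)

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(Z[:150], wear[:150])

    mean, std = gpr.predict(Z[150:], return_std=True)          # point prediction + uncertainty
    lower, upper = mean - 1.96 * std, mean + 1.96 * std        # approximate 95% confidence band
    ```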

  5. Gaussian Process Regression Model in Spatial Logistic Regression

    Science.gov (United States)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

Spatial analysis has developed very quickly in the last decade. One of the most popular approaches is based on the neighbourhood of the region; unfortunately, it has some limitations, such as difficulty in prediction. We therefore propose Gaussian process regression (GPR) to address this issue. In this paper, we focus on spatial modeling with GPR for binomial data with a logit link function. We investigate the performance of the model, discuss inference for estimating the parameters and hyper-parameters as well as prediction, and present simulation studies in the last section.
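    A hedged sketch of GP-based spatial binary modeling in the spirit of the record above, using scikit-learn's GaussianProcessClassifier (a GP latent function passed through a logistic link via a Laplace approximation); the coordinates, risk surface and responses are synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(7)
    coords = rng.uniform(0, 10, size=(200, 2))                 # spatial locations
    p_true = 1 / (1 + np.exp(-(coords[:, 0] - coords[:, 1])))  # smooth latent risk surface
    y = rng.binomial(1, p_true)                                # binary (0/1) responses

    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=2.0)).fit(coords, y)
    grid = np.array([[2.0, 8.0], [8.0, 2.0]])
    print(gpc.predict_proba(grid)[:, 1])                       # predicted event probabilities
    ```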

  6. Gaussian Process Regression for WDM System Performance Prediction

    DEFF Research Database (Denmark)

    Wass, Jesper; Thrane, Jakob; Piels, Molly

    2017-01-01

Gaussian process regression is numerically and experimentally investigated to predict the bit error rate of a 24 x 28 GBd QPSK WDM system. The proposed method produces accurate predictions from multi-dimensional and sparse measurement data.

  7. Robust Gaussian Process Regression with a Student-t Likelihood

    NARCIS (Netherlands)

    Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.

    2011-01-01

This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference, which is why several approximate methods have been proposed.

  8. Analysis of some methods for reduced rank Gaussian process regression

    DEFF Research Database (Denmark)

    Quinonero-Candela, J.; Rasmussen, Carl Edward

    2005-01-01

    While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent...... proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank...... Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning...

  9. Solving Dynamic Traveling Salesman Problem Using Dynamic Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Stephen M. Akandwanaho

    2014-01-01

Full Text Available This paper solves the dynamic traveling salesman problem (DTSP) using the dynamic Gaussian process regression (DGPR) method. The problem of the varying-correlation tour is alleviated by a nonstationary covariance function interleaved with DGPR to generate a predictive distribution for the DTSP tour. This approach is combined with the nearest neighbor (NN) method and iterated local search to track dynamic optima. Experimental results were obtained on DTSP instances, with comparisons against a genetic algorithm and simulated annealing. The proposed approach demonstrates superiority in finding good traveling salesman problem (TSP) tours with less computational time under nonstationary conditions.

  10. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.

  11. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.

  12. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash; Liang, Faming; Ding, Yu

    2014-01-01

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.

  13. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash

    2014-02-05

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.

  14. Gaussian process regression for forecasting battery state of health

    Science.gov (United States)

    Richardson, Robert R.; Osborne, Michael A.; Howey, David A.

    2017-07-01

    Accurately predicting the future capacity and remaining useful life of batteries is necessary to ensure reliable system operation and to minimise maintenance costs. The complex nature of battery degradation has meant that mechanistic modelling of capacity fade has thus far remained intractable; however, with the advent of cloud-connected devices, data from cells in various applications is becoming increasingly available, and the feasibility of data-driven methods for battery prognostics is increasing. Here we propose Gaussian process (GP) regression for forecasting battery state of health, and highlight various advantages of GPs over other data-driven and mechanistic approaches. GPs are a type of Bayesian non-parametric method, and hence can model complex systems whilst handling uncertainty in a principled manner. Prior information can be exploited by GPs in a variety of ways: explicit mean functions can be used if the functional form of the underlying degradation model is available, and multiple-output GPs can effectively exploit correlations between data from different cells. We demonstrate the predictive capability of GPs for short-term and long-term (remaining useful life) forecasting on a selection of capacity vs. cycle datasets from lithium-ion cells.
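    A minimal sketch of the forecasting idea above: an explicit parametric mean (here a simple linear fade, an assumption) combined with a GP on the residuals, giving capacity forecasts with predictive uncertainty. The synthetic data and kernel are illustrative, not the cells analysed in the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, WhiteKernel

    cycles = np.arange(1, 301).reshape(-1, 1)
    capacity = 1.0 - 0.0008 * cycles.ravel() \
               + 0.01 * np.random.default_rng(1).normal(size=300)   # synthetic capacity fade

    coeffs = np.polyfit(cycles.ravel(), capacity, deg=1)             # explicit (linear) mean function
    mean_fn = lambda c: np.polyval(coeffs, c.ravel())

    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5) + WhiteKernel())
    gpr.fit(cycles, capacity - mean_fn(cycles))                      # GP models the residuals

    future = np.arange(301, 401).reshape(-1, 1)
    resid_mean, resid_std = gpr.predict(future, return_std=True)
    forecast = mean_fn(future) + resid_mean                          # forecast; +/- resid_std gives the band
    ```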

  15. Gaussian process regression for sensor networks under localization uncertainty

    Science.gov (United States)

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated into the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz. Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for localization uncertainty. Finally, we applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
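    A crude sketch of the Monte Carlo flavor of the approach above: GP predictions are averaged over random draws of the uncertain sensor positions. The field, noise levels and the way positions are resampled around the reported locations are simplifying assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(9)
    true_pos = rng.uniform(0, 10, size=(60, 2))                      # unknown true sensor locations
    field = lambda p: np.sin(p[:, 0]) * np.cos(p[:, 1])              # scalar field being sampled
    obs = field(true_pos) + 0.05 * rng.normal(size=60)
    reported_pos = true_pos + 0.3 * rng.normal(size=true_pos.shape)  # positions with localization error

    query = np.array([[5.0, 5.0]])
    preds = []
    for _ in range(100):                                             # Monte Carlo over plausible positions
        sampled = reported_pos + 0.3 * rng.normal(size=reported_pos.shape)
        gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.05**2)).fit(sampled, obs)
        preds.append(gpr.predict(query)[0])
    print(np.mean(preds), np.std(preds))                             # predictive mean and spread at the query point
    ```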

  16. Computed tomography perfusion imaging denoising using Gaussian process regression

    International Nuclear Information System (INIS)

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna

    2012-01-01

Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves, and reduces oscillations in the curve. GPR is superior to the comparable techniques used in this study. (note)
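    A minimal sketch of temporal GPR denoising applied to a single voxel's time-concentration curve; the acquisition times, bolus shape, noise level and kernel are illustrative assumptions, not the clinical data of the study.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    t = np.linspace(0, 40, 41).reshape(-1, 1)                        # acquisition times (s)
    clean = np.exp(-(t.ravel() - 15.0)**2 / 40.0)                    # idealized bolus passage
    noisy = clean + 0.2 * np.random.default_rng(2).normal(size=t.size)

    # The WhiteKernel absorbs measurement noise; the RBF captures the smooth temporal signal.
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(0.04))
    gpr.fit(t, noisy)
    denoised = gpr.predict(t)                                        # smoothed time-concentration curve
    ```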

  17. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    Science.gov (United States)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view: the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; no multidimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest progressively building up the posterior distribution; incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. Left and right of the interface are assumed independent, where the squared exponential kernel serves as covariance.

  18. Statistical 21-cm Signal Separation via Gaussian Process Regression Analysis

    Science.gov (United States)

    Mertens, F. G.; Ghosh, A.; Koopmans, L. V. E.

    2018-05-01

Detecting and characterizing the Epoch of Reionization and Cosmic Dawn via the redshifted 21-cm hyperfine line of neutral hydrogen will revolutionize the study of the formation of the first stars, galaxies, black holes and intergalactic gas in the infant Universe. The wealth of information encoded in this signal is, however, buried under foregrounds that are many orders of magnitude brighter. These must be removed accurately and precisely in order to reveal the feeble 21-cm signal. This requires not only the modeling of the Galactic and extra-galactic emission, but also of the often stochastic residuals due to imperfect calibration of the data caused by ionospheric and instrumental distortions. To stochastically model these effects, we introduce a new method based on 'Gaussian Process Regression' (GPR) which is able to statistically separate the 21-cm signal from most of the foregrounds and other contaminants. Using simulated LOFAR-EoR data that include strong instrumental mode-mixing, we show that this method is capable of recovering the 21-cm signal power spectrum across the entire range k = 0.07-0.3 h cMpc^{-1}. The GPR method is most optimal, having minimal and controllable impact on the 21-cm signal, when the foregrounds are correlated on frequency scales ≳ 3 MHz and the rms of the signal has σ_21cm ≳ 0.1 σ_noise. This signal separation improves the 21-cm power-spectrum sensitivity by a factor ≳ 3 compared to foreground avoidance strategies and enables the sensitivity of current and future 21-cm instruments such as the Square Kilometre Array to be fully exploited.
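    A hedged, one-dimensional sketch of the component-separation idea: an additive GP with a frequency-coherent kernel for the foregrounds and a short-scale kernel for the 21-cm-like fluctuations, where the foreground posterior mean is subtracted from the data. All amplitudes and coherence scales are toy assumptions, and scikit-learn stands in for the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

    rng = np.random.default_rng(8)
    freq = np.linspace(120, 160, 200).reshape(-1, 1)                 # frequency channels (MHz)
    foreground = 50.0 * np.sin(freq.ravel() / 30.0)                  # smooth in frequency
    signal = 0.2 * rng.normal(size=200)                              # rapidly fluctuating component
    data = foreground + signal + 0.05 * rng.normal(size=200)

    k_fg = ConstantKernel(50.0**2) * RBF(length_scale=20.0)          # coherent over many MHz
    k_21 = ConstantKernel(0.2**2) * RBF(length_scale=0.5)            # short coherence scale
    d = data - data.mean()                                           # center for the zero-mean GP prior
    gpr = GaussianProcessRegressor(kernel=k_fg + k_21 + WhiteKernel(0.05**2)).fit(freq, d)

    # Posterior mean of the foreground component alone: K_fg(x*, X) @ alpha.
    K_fg = gpr.kernel_.k1.k1(freq, gpr.X_train_)
    residual = d - K_fg @ gpr.alpha_                                 # estimated 21-cm-like residual
    ```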

  19. Multi-fidelity Gaussian process regression for computer experiments

    International Nuclear Information System (INIS)

    Le-Gratiet, Loic

    2013-01-01

This work is on Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular this formulation allows for fast implementation and for closed-form expressions for the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it really allows for the practical application of such a method in real cases. Furthermore, fast cross validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e. the decay rate of the mean square error) with respect to the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based meta-models with stationary covariance functions) has been obtained while the previous proofs hold only for degenerate kernels (i.e. when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework. (author) [fr

  20. A Gaussian mixture copula model based localized Gaussian process regression approach for long-term wind speed prediction

    International Nuclear Information System (INIS)

    Yu, Jie; Chen, Kuilin; Mori, Junichi; Rashid, Mudassir M.

    2013-01-01

    Optimizing wind power generation and controlling the operation of wind turbines to efficiently harness the renewable wind energy is a challenging task due to the intermittency and unpredictable nature of wind speed, which has significant influence on wind power production. A new approach for long-term wind speed forecasting is developed in this study by integrating GMCM (Gaussian mixture copula model) and localized GPR (Gaussian process regression). The time series of wind speed is first classified into multiple non-Gaussian components through the Gaussian mixture copula model and then Bayesian inference strategy is employed to incorporate the various non-Gaussian components using the posterior probabilities. Further, the localized Gaussian process regression models corresponding to different non-Gaussian components are built to characterize the stochastic uncertainty and non-stationary seasonality of the wind speed data. The various localized GPR models are integrated through the posterior probabilities as the weightings so that a global predictive model is developed for the prediction of wind speed. The proposed GMCM–GPR approach is demonstrated using wind speed data from various wind farm locations and compared against the GMCM-based ARIMA (auto-regressive integrated moving average) and SVR (support vector regression) methods. In contrast to GMCM–ARIMA and GMCM–SVR methods, the proposed GMCM–GPR model is able to well characterize the multi-seasonality and uncertainty of wind speed series for accurate long-term prediction. - Highlights: • A novel predictive modeling method is proposed for long-term wind speed forecasting. • Gaussian mixture copula model is estimated to characterize the multi-seasonality. • Localized Gaussian process regression models can deal with the random uncertainty. • Multiple GPR models are integrated through Bayesian inference strategy. • The proposed approach shows higher prediction accuracy and reliability

  1. Sparse Inverse Gaussian Process Regression with Application to Climate Network Discovery

    Data.gov (United States)

    National Aeronautics and Space Administration — Regression problems on massive data sets are ubiquitous in many application domains including the Internet, earth and space sciences, and finances. Gaussian Process...

  2. Block-GP: Scalable Gaussian Process Regression for Multimodal Data

    Data.gov (United States)

    National Aeronautics and Space Administration — Regression problems on massive data sets are ubiquitous in many application domains including the Internet, earth and space sciences, and finances. In many cases,...

  3. Soft Sensor Modeling Based on Multiple Gaussian Process Regression and Fuzzy C-mean Clustering

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

Full Text Available In order to overcome the difficulties of online measurement of some crucial biochemical variables in fermentation processes, a new soft sensor modeling method is presented based on Gaussian process regression and fuzzy C-means clustering. Considering that a typical fermentation process can be divided into four phases (lag, exponential growth, stable and death), the training samples are classified into four subcategories using the fuzzy C-means clustering algorithm. For each subcategory, the samples are trained using Gaussian process regression and the corresponding soft-sensing sub-model is established. For a new sample, the memberships between this sample and the sub-models are computed based on the Euclidean distance, and the soft sensor prediction is obtained as the weighted sum of the sub-model outputs. Taking lysine fermentation as an example, simulations and experiments are carried out; the results show that the presented method achieves better fitting and generalization ability than a radial basis function neural network and a single Gaussian process regression model.
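    A rough sketch of the multi-model idea above: partition the training data, fit one GPR per phase, and blend the sub-model predictions with distance-based membership weights. KMeans is used here as a simple stand-in for fuzzy C-means, and the fermentation-like data are synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 40, size=(300, 1))                                 # fermentation time (h), assumed input
    y = np.tanh((X.ravel() - 15.0) / 5.0) + 0.05 * rng.normal(size=300)   # synthetic biomass-like output

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)           # 4 phases (stand-in for fuzzy C-means)
    models = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
              .fit(X[km.labels_ == k], y[km.labels_ == k]) for k in range(4)]

    def predict(x_new):
        d = np.linalg.norm(x_new[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
        w = 1.0 / (d + 1e-9)
        w /= w.sum(axis=1, keepdims=True)                                 # membership-like weights
        preds = np.column_stack([m.predict(x_new) for m in models])
        return (w * preds).sum(axis=1)                                    # weighted sum of sub-model outputs

    print(predict(np.array([[10.0], [30.0]])))
    ```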

  4. Regression Analysis for Multivariate Dependent Count Data Using Convolved Gaussian Processes

    OpenAIRE

    Sofro, A'yunin; Shi, Jian Qing; Cao, Chunzheng

    2017-01-01

Research on Poisson regression analysis for dependent data has developed rapidly in the last decade. One of the difficult problems in the multivariate case is how to construct a cross-correlation structure while ensuring that the covariance matrix is positive definite. To address this issue, we propose to use a convolved Gaussian process (CGP) in this paper. The approach provides a semi-parametric model and offers a natural framework for modeling common mean structure and covariance ...

  5. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    Science.gov (United States)

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

In many scientific fields, it is common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular, to model the latent structure in a binary regression model, allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link function. Commonly adopted link functions such as the probit or logit have fixed skewness and lack the flexibility to allow the data to determine the degree of skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which have only either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  6. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem, and in doing so introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

  7. Continuous-variable quantum Gaussian process regression and quantum singular value decomposition of nonsparse low-rank matrices

    Science.gov (United States)

    Das, Siddhartha; Siopsis, George; Weedbrook, Christian

    2018-02-01

With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.

  8. Multi-fidelity Gaussian process regression for prediction of random fields

    Energy Technology Data Exchange (ETDEWEB)

    Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)

    2017-05-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.

  9. Multi-fidelity Gaussian process regression for prediction of random fields

    International Nuclear Information System (INIS)

    Parussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G.E.

    2017-01-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.

  10. Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.

    Science.gov (United States)

    Mones, Letif; Bernstein, Noam; Csányi, Gábor

    2016-10-11

    Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.

  11. Experimentally testing the dependence of momentum transport on second derivatives using Gaussian process regression

    Science.gov (United States)

    Chilenski, M. A.; Greenwald, M. J.; Hubbard, A. E.; Hughes, J. W.; Lee, J. P.; Marzouk, Y. M.; Rice, J. E.; White, A. E.

    2017-12-01

It remains an open question to explain the dramatic change in intrinsic rotation induced by slight changes in electron density (White et al 2013 Phys. Plasmas 20 056106). One proposed explanation is that momentum transport is sensitive to the second derivatives of the temperature and density profiles (Lee et al 2015 Plasma Phys. Control. Fusion 57 125006), but it is widely considered to be impossible to measure these higher derivatives. In this paper, we show that it is possible to estimate second derivatives of electron density and temperature using a nonparametric regression technique known as Gaussian process regression. This technique avoids over-constraining the fit by not assuming an explicit functional form for the fitted curve. The uncertainties, obtained rigorously using Markov chain Monte Carlo sampling, are small enough that it is reasonable to explore hypotheses which depend on second derivatives. It is found that the differences in the second derivatives of n_e and T_e between the peaked and hollow rotation cases are rather small, suggesting that changes in the second derivatives are not likely to explain the experimental results.
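    A minimal sketch of obtaining higher derivatives from a GP fit: for an RBF kernel the posterior mean is a weighted sum of kernel functions, so its second derivative is available in closed form. The synthetic profile and kernel settings are assumptions, not the C-Mod analysis of the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

    x = np.linspace(0, 1, 25).reshape(-1, 1)
    y = np.exp(-4.0 * x.ravel()) + 0.01 * np.random.default_rng(3).normal(size=25)

    kernel = ConstantKernel(1.0) * RBF(length_scale=0.2) + WhiteKernel(1e-4)
    gpr = GaussianProcessRegressor(kernel=kernel).fit(x, y)

    sigma2 = gpr.kernel_.k1.k1.constant_value          # fitted signal variance
    ell = gpr.kernel_.k1.k2.length_scale               # fitted length scale
    xs = np.linspace(0, 1, 200).reshape(-1, 1)
    diff = xs - gpr.X_train_.ravel()                    # (200, 25) pairwise differences
    k = sigma2 * np.exp(-0.5 * diff**2 / ell**2)
    d2k = k * (diff**2 / ell**4 - 1.0 / ell**2)         # d^2 k / dx^2 for the RBF kernel
    second_derivative = d2k @ gpr.alpha_                # second derivative of the posterior mean
    ```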

  12. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    Science.gov (United States)

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

This paper describes Gaussian process regression (GPR) models presented in the predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input-output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  13. A Gaussian process regression based hybrid approach for short-term wind speed prediction

    International Nuclear Information System (INIS)

    Zhang, Chi; Wei, Haikun; Zhao, Xin; Liu, Tianhong; Zhang, Kanjian

    2016-01-01

    Highlights: • A novel hybrid approach is proposed for short-term wind speed prediction. • This method combines the parametric AR model with the non-parametric GPR model. • The relative importance of different inputs is considered. • Different types of covariance functions are considered and combined. • It can provide both accurate point forecasts and satisfactory prediction intervals. - Abstract: This paper proposes a hybrid model based on autoregressive (AR) model and Gaussian process regression (GPR) for probabilistic wind speed forecasting. In the proposed approach, the AR model is employed to capture the overall structure from wind speed series, and the GPR is adopted to extract the local structure. Additionally, automatic relevance determination (ARD) is used to take into account the relative importance of different inputs, and different types of covariance functions are combined to capture the characteristics of the data. The proposed hybrid model is compared with the persistence model, artificial neural network (ANN), and support vector machine (SVM) for one-step ahead forecasting, using wind speed data collected from three wind farms in China. The forecasting results indicate that the proposed method can not only improve point forecasts compared with other methods, but also generate satisfactory prediction intervals.
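    A hedged sketch of probabilistic one-step-ahead wind-speed forecasting with lagged inputs, an anisotropic (ARD-style) RBF kernel and a noise term. The data, lag order and kernel are assumptions, and the parametric AR component of the hybrid model above is omitted for brevity.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    speed = 8.0 + 2.0 * np.sin(np.arange(500) / 20.0) + rng.normal(scale=0.5, size=500)

    lags = 4                                                             # lag-embedding of the series
    X = np.column_stack([speed[i:len(speed) - lags + i] for i in range(lags)])
    y = speed[lags:]

    kernel = RBF(length_scale=np.ones(lags)) + WhiteKernel()             # one length scale per lag (ARD)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:-50], y[:-50])

    mean, std = gpr.predict(X[-50:], return_std=True)
    interval = (mean - 1.96 * std, mean + 1.96 * std)                    # approximate prediction interval
    ```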

  14. tgp: An R Package for Bayesian Nonstationary, Semiparametric Nonlinear Regression and Design by Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2007-06-01

Full Text Available The tgp package for R is a tool for fully Bayesian nonstationary, semiparametric nonlinear regression and design by treed Gaussian processes with jumps to the limiting linear model. Special cases also implemented include Bayesian linear models, linear CART, and stationary separable and isotropic Gaussian processes. In addition to inference and posterior prediction, the package supports the (sequential) design of experiments under these models paired with several objective criteria. 1-d and 2-d plotting, with higher dimension projection and slice capabilities, and tree drawing functions (requiring the maptree and combinat packages) are also provided for visualization of tgp objects.

  15. Gaussian process regression based optimal design of combustion systems using flame images

    International Nuclear Information System (INIS)

    Chen, Junghui; Chan, Lester Lik Teck; Cheng, Yi-Cheng

    2013-01-01

Highlights: • Digital color images of flames are applied to combustion design. • The combustion process is modeled stochastically using a GP. • GP-based design under uncertainty is carried out and evaluated on a real combustion system. - Abstract: With advanced methods of digital image processing and optical sensing, it is possible to carry out continuous imaging on-line in combustion processes. In this paper, a method that extracts characteristics from flame images is presented to immediately predict the outlet content of the flue gas. First, from the large number of flame image data, principal component analysis is used to discover the principal components, or combinational variables, which describe the important trends and variations in the operation data. Then, stochastic modeling of the combustion process is performed with a Gaussian process, with the aim of capturing the stochastic nature of the flame associated with the oxygen content. The designed oxygen content accounts for the uncertainty present in the combustion. A reference image can be designed for the actual combustion process to provide easy and straightforward maintenance of the combustion process.

  16. Adaptive smoothing based on Gaussian processes regression increases the sensitivity and specificity of fMRI data.

    Science.gov (United States)

    Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z

    2017-03-01

Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity: smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.

  17. Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence

    Directory of Open Access Journals (Sweden)

    Liang Yan

    2018-03-01

Full Text Available In this paper, we examine two widely-used approaches, the polynomial chaos expansion (PCE) and Gaussian process (GP) regression, for the development of surrogate models. The theoretical differences between the PCE and GP approximations are discussed. A state-of-the-art PCE approach is constructed based on high precision quadrature points; however, the need for truncation may result in potential precision loss. The GP approach performs well on small datasets and allows a fine and precise trade-off between fitting the data and smoothing, but its overall performance depends largely on the training dataset. The reproducing kernel Hilbert space (RKHS) and Mercer's theorem are introduced to form a linkage between the two methods: the two surrogates can be embedded in two isomorphic RKHS, by which we propose a novel method named Gaussian process on polynomial chaos basis (GPCB) that incorporates the PCE and GP. A theoretical comparison is made between the PCE and GPCB with the help of the Kullback–Leibler divergence. We show that the GPCB is as stable and accurate as the PCE method. Furthermore, the GPCB is a one-step Bayesian method that chooses the best subset of RKHS in which the true function should lie, while the PCE method requires an adaptive procedure. Simulations of 1D and 2D benchmark functions show that GPCB outperforms both the PCE and classical GP methods. In order to solve high dimensional problems, a random sample scheme with a constructive design (i.e., a tensor product of quadrature points) is proposed to generate a valid training dataset for the GPCB method. This approach utilizes the high numerical accuracy underlying the quadrature points while ensuring computational feasibility. Finally, the experimental results show that our sample strategy has a higher accuracy than classical experimental designs; meanwhile, it is suitable for solving high dimensional problems.

  18. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    Science.gov (United States)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used to validate the methodology. Several combinations were tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.

  19. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    Science.gov (United States)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on charging curve is proposed in this paper. Different from other researches where SOH is commonly estimated by cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between selected features and SOH. On the other hand, some adjustments are made in the proposed GPR model. Covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimate accuracy and adapt to the case of multidimensional input. Several aging data from NASA data repository are used for demonstrating the estimation effect by the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with dynamic discharging profile is used to verify the robustness and reliability of this method.

  20. Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression

    International Nuclear Information System (INIS)

    Chilenski, M.A.; Greenwald, M.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R.; Marzouk, Y.

    2015-01-01

    The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts. (paper)
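    A minimal sketch of propagating fit uncertainty into profile gradients: draw samples from the GP posterior of the smooth profile and finite-difference each draw. The synthetic profile, kernel and noise level are illustrative assumptions, not the Alcator C-Mod analysis described above.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, Matern

    rho = np.linspace(0, 1, 20).reshape(-1, 1)                          # normalized radius
    Te = 3.0 * (1 - rho.ravel()**2)**1.5 + 0.05 * np.random.default_rng(5).normal(size=20)

    # Measurement noise enters via alpha, so posterior samples are of the smooth profile.
    gpr = GaussianProcessRegressor(kernel=ConstantKernel(2.0) * Matern(length_scale=0.3, nu=2.5),
                                   alpha=0.05**2).fit(rho, Te)

    grid = np.linspace(0, 1, 100).reshape(-1, 1)
    samples = gpr.sample_y(grid, n_samples=200, random_state=0)          # (100, 200) posterior draws
    grads = np.gradient(samples, grid.ravel(), axis=0)                   # gradient of each draw
    grad_mean, grad_std = grads.mean(axis=1), grads.std(axis=1)          # gradient profile and its uncertainty
    ```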

  1. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR + , implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR + algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR + implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR + in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR + . The experimental results show that the EKF-GPR + algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR + reduces the patient-wise RMS error to 37%, 39% and 42

  2. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in

  3. H0 from cosmic chronometers and Type Ia supernovae, with Gaussian Processes and the novel Weighted Polynomial Regression method

    Science.gov (United States)

    Gómez-Valent, Adrià; Amendola, Luca

    2018-04-01

    In this paper we present new constraints on the Hubble parameter H0 using: (i) the available data on H(z) obtained from cosmic chronometers (CCH); (ii) the Hubble rate data points extracted from the supernovae of Type Ia (SnIa) of the Pantheon compilation and the Hubble Space Telescope (HST) CANDELS and CLASH Multi-Cycle Treasury (MCT) programs; and (iii) the local HST measurement of H0 provided by Riess et al. (2018), H0HST=(73.45±1.66) km/s/Mpc. Various determinations of H0 using the Gaussian processes (GPs) method and the most updated list of CCH data have been recently provided by Yu, Ratra & Wang (2018). Using the Gaussian kernel they find H0=(67.42±4.75) km/s/Mpc. Here we extend their analysis to also include the most recently released and complete set of SnIa data, which allows us to reduce the uncertainty by a factor ~ 3 with respect to the result found by only considering the CCH information. We obtain H0=(67.06±1.68) km/s/Mpc, which again favors the lower range of values for H0 and is in tension with H0HST. The tension reaches the 2.71σ level. We also round off the GPs determination by taking into account the error propagation of the kernel hyperparameters when the CCH data with and without H0HST are used in the analysis. In addition, we present a novel method to reconstruct functions from data, which consists of a weighted sum of polynomial regressions (WPR). We apply it from a cosmographic perspective to reconstruct H(z) and estimate H0 from CCH and SnIa measurements. The result obtained with this method, H0=(68.90±1.96) km/s/Mpc, is fully compatible with the GP ones. Finally, a more conservative GPs+WPR value is also provided, H0=(68.45±2.00) km/s/Mpc, which is still almost 2σ away from H0HST.

  4. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression.

    Science.gov (United States)

    Bukhari, W; Hong, S-M

    2015-01-07

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR(+), implements a gating function without pre-specifying a particular region of the patient's breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR(+) algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR(+) implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR(+) in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR(+). The experimental results show that the EKF-GPR(+) algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR(+) reduces the patient-wise RMS error to 37%, 39% and

  5. Detecting periodicities with Gaussian processes

    Directory of Open Access Journals (Sweden)

    Nicolas Durrande

    2016-04-01

    Full Text Available We consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples. Our approach is based on Gaussian process regression, which provides a flexible non-parametric framework for modelling periodic data. We introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. This decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. To quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. Although the method can be applied to many kernels, we give a special emphasis to the Matérn family, from the expression of the reproducing kernel Hilbert space inner product to the implementation of the associated periodic kernels in a Gaussian process toolkit. The proposed method is illustrated by considering the detection of periodically expressed genes in the Arabidopsis genome.
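
    A rough illustration of the kernel decomposition described above (not the paper's Matérn-based construction or its RKHS-derived periodicity ratio): fit a GP whose covariance is the sum of a periodic and an aperiodic term and read off the two posterior sub-model means from the additive structure. The score at the end is a crude variance-share proxy, not the ratio derived in the paper, and all settings below are made up.

        import numpy as np

        def k_periodic(x1, x2, period=1.0, length=0.5, var=1.0):
            # Standard periodic (exp-sine-squared) kernel.
            d = np.abs(x1[:, None] - x2[None, :])
            return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

        def k_aperiodic(x1, x2, length=2.0, var=1.0):
            # Smooth aperiodic (squared-exponential) kernel.
            d = x1[:, None] - x2[None, :]
            return var * np.exp(-0.5 * (d / length) ** 2)

        # Noisy observations of a signal with a periodic part and a slow drift.
        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(0.0, 6.0, 60))
        y = np.sin(2 * np.pi * x) + 0.3 * x + 0.1 * rng.standard_normal(x.size)

        noise = 0.1
        K = k_periodic(x, x) + k_aperiodic(x, x) + noise ** 2 * np.eye(x.size)
        alpha = np.linalg.solve(K, y)

        xs = np.linspace(0.0, 6.0, 200)
        f_per = k_periodic(xs, x) @ alpha      # posterior mean of the periodic sub-model
        f_aper = k_aperiodic(xs, x) @ alpha    # posterior mean of the aperiodic sub-model

        # Crude periodicity score: share of fitted signal variance in the periodic part.
        score = f_per.var() / (f_per.var() + f_aper.var())
        print("periodicity score: %.2f" % score)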

  6. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus Plenge

    2017-01-01

    This paper establishes a remarkable result regarding Palm distributions for a log Gaussian Cox process: the reduced Palm distribution for a log Gaussian Cox process is itself a log Gaussian Cox process that only differs from the original log Gaussian Cox process in the intensity function. This new...... result is used to study functional summaries for log Gaussian Cox processes....

  7. Neural networks vs Gaussian process regression for representing potential energy surfaces: A comparative study of fit quality and vibrational spectrum accuracy

    Science.gov (United States)

    Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei

    2018-06-01

    For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
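
    The GP side of such a comparison can be sketched in a few lines. A toy 2-D surface and scikit-learn defaults stand in for the formaldehyde PES and the kernels used in the paper; the printed number has nothing to do with the cm-1 errors quoted above.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy 2-D "potential energy surface" (a Morse-like well), purely illustrative.
        def pes(X):
            r = np.linalg.norm(X, axis=1)
            return (1.0 - np.exp(-(r - 1.0))) ** 2

        rng = np.random.default_rng(2)
        X_train = rng.uniform(0.5, 2.5, size=(300, 2))   # "ab initio" sample points
        X_test = rng.uniform(0.5, 2.5, size=(2000, 2))   # dense evaluation set

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                      alpha=1e-8, normalize_y=True)
        gp.fit(X_train, pes(X_train))

        rmse = np.sqrt(np.mean((gp.predict(X_test) - pes(X_test)) ** 2))
        print("hold-out RMSE: %.2e" % rmse)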

  8. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit

  9. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2016-01-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function

  10. Determination of gaussian peaks in gamma spectra by iterative regression

    International Nuclear Information System (INIS)

    Nordemann, D.J.R.

    1987-05-01

    The parameters of the peaks in gamma-ray spectra are determined by a simple iterative regression method. For each peak, the parameters are associated with a Gaussian curve (3 parameters) located above a linear continuum (2 parameters). This method can produce a complete calculation of the statistical uncertainties and an accuracy higher than that of other methods. (author) [pt

  11. Gaussian processes for machine learning.

    Science.gov (United States)

    Seeger, Matthias

    2004-04-01

    Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show up precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
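
    For reference, the predictive equations such a tutorial builds on are the standard GP regression formulas (notation ours, not the paper's). With training inputs X, targets y, kernel k, Gram matrix K = k(X, X) and noise variance \sigma^2, the posterior at a test input x_* is Gaussian with

        \mu_*      = k(x_*, X)\,[K + \sigma^2 I]^{-1}\, y
        \sigma_*^2 = k(x_*, x_*) - k(x_*, X)\,[K + \sigma^2 I]^{-1}\, k(X, x_*)

    The O(n^3) cost of factorizing K + \sigma^2 I is the heavy computational scaling referred to above, and is precisely what the cited sparse approximations reduce.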

  12. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves performance comparable to that of the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)

  13. Ridge Regression Signal Processing

    Science.gov (United States)

    Kuhl, Mark R.

    1990-01-01

    The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
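
    The ridge idea itself is compact enough to state in code. The sketch below shows only the batch closed-form estimator and the shrinkage it provides on a nearly collinear ("poor geometry") design; the linearized recursive ridge estimator derived in the work, which updates the estimate sequentially inside the navigation filter, is not reproduced here, and all numbers are made up.

        import numpy as np

        def ridge(X, y, lam=1.0):
            # Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y.
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

        # Nearly collinear design, the linear-algebra analogue of poor satellite geometry.
        rng = np.random.default_rng(3)
        x1 = rng.standard_normal(100)
        X = np.column_stack([x1, x1 + 1e-3 * rng.standard_normal(100)])
        y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)

        print("least squares :", np.linalg.lstsq(X, y, rcond=None)[0])
        print("ridge (lam=0.1):", ridge(X, y, lam=0.1))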

  14. Gaussian Process-Mixture Conditional Heteroscedasticity.

    Science.gov (United States)

    Platanios, Emmanouil A; Chatzis, Sotirios P

    2014-05-01

    Generalized autoregressive conditional heteroscedasticity (GARCH) models have long been considered as one of the most successful families of approaches for volatility modeling in financial return series. In this paper, we propose an alternative approach based on methodologies widely used in the field of statistical machine learning. Specifically, we propose a novel nonparametric Bayesian mixture of Gaussian process regression models, each component of which models the noise variance process that contaminates the observed data as a separate latent Gaussian process driven by the observed data. This way, we essentially obtain a Gaussian process-mixture conditional heteroscedasticity (GPMCH) model for volatility modeling in financial return series. We impose a nonparametric prior with power-law nature over the distribution of the model mixture components, namely the Pitman-Yor process prior, to allow for better capturing modeled data distributions with heavy tails and skewness. Finally, we provide a copula-based approach for obtaining a predictive posterior for the covariances over the asset returns modeled by means of a postulated GPMCH model. We evaluate the efficacy of our approach in a number of benchmark scenarios, and compare its performance to state-of-the-art methodologies.

  15. Neutron inverse kinetics via Gaussian Processes

    International Nuclear Information System (INIS)

    Picca, Paolo; Furfaro, Roberto

    2012-01-01

    Highlights: ► A novel technique for the interpretation of experiments in ADS is presented. ► The technique is based on Bayesian regression, implemented via Gaussian Processes. ► GPs overcome the limits of classical methods, based on PK approximation. ► Results compare GPs and ANN performance, underlining similarities and differences. - Abstract: The paper introduces the application of Gaussian Processes (GPs) to determine the subcriticality level in accelerator-driven systems (ADSs) through the interpretation of pulsed experiment data. ADSs have peculiar kinetic properties due to their special core design. For this reason, classical inversion techniques based on point kinetics (PK) generally fail to generate an accurate estimate of reactor subcriticality. Similarly to Artificial Neural Networks (ANNs), Gaussian Processes can be successfully trained to learn the underlying inverse neutron kinetic model and, as such, they are not limited to the model choice. Importantly, GPs are strongly rooted in Bayes’ theorem, which makes them a powerful tool for statistical inference. Here, GPs have been designed and trained on a set of kinetics models (e.g. point kinetics and multi-point kinetics) for homogeneous and heterogeneous settings. The results presented in the paper show that GPs are very efficient and accurate in predicting the reactivity for ADS-like systems. The variance computed via GPs may provide an indication on how to generate additional data as a function of the desired accuracy.

  16. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  17. Scalable Gaussian Processes and the search for exoplanets

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Gaussian Processes are a class of non-parametric models that are often used to model stochastic behavior in time series or spatial data. A major limitation for the application of these models to large datasets is the computational cost. The cost of a single evaluation of the model likelihood scales as the third power of the number of data points. In the search for transiting exoplanets, the datasets of interest have tens of thousands to millions of measurements with uneven sampling, rendering naive application of a Gaussian Process model impractical. To attack this problem, we have developed robust approximate methods for Gaussian Process regression that can be applied at this scale. I will describe the general problem of Gaussian Process regression and offer several applicable use cases. Finally, I will present our work on scaling this model to the exciting field of exoplanet discovery and introduce a well-tested open source implementation of these new methods.

  18. Palm distributions for log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper; Waagepetersen, Rasmus

    This paper reviews useful results related to Palm distributions of spatial point processes and provides a new result regarding the characterization of Palm distributions for the class of log Gaussian Cox processes. This result is used to study functional summary statistics for a log Gaussian Cox...

  19. Gaussian processes and constructive scalar field theory

    International Nuclear Information System (INIS)

    Benfatto, G.; Nicolo, F.

    1981-01-01

    Recent years have seen very deep progress in constructive Euclidean field theory, with many implications for the theory of random fields. The authors discuss an approach to super-renormalizable scalar field theories which puts in evidence, in particular, the connections with the theory of the Gaussian processes associated with elliptic operators. The paper consists of two parts. Part I treats some problems in the theory of Gaussian processes which arise in the approach to the φ⁴₃ theory. Part II is devoted to the discussion of the ultraviolet stability in the φ⁴₃ theory. (Auth.)

  20. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  1. Continuous Disintegrations of Gaussian Processes

    OpenAIRE

    LaGatta, Tom

    2010-01-01

    The goal of this paper is to understand the conditional law of a stochastic process once it has been observed over an interval. To make this precise, we introduce the notion of a continuous disintegration: a regular conditional probability measure which varies continuously in the conditioned parameter. The conditioning is infinite-dimensional in character, which leads us to consider the general case of probability measures in Banach spaces. Our main result is that for a certain quantity $M$ b...

  2. Log Gaussian Cox processes on the sphere

    DEFF Research Database (Denmark)

    Pacheco, Francisco Andrés Cuevas; Møller, Jesper

    We define and study the existence of log Gaussian Cox processes (LGCPs) for the description of inhomogeneous and aggregated/clustered point patterns on the d-dimensional sphere, with d = 2 of primary interest. Useful theoretical properties of LGCPs are studied and applied for the description of sky...... positions of galaxies, in comparison with previous analysis using a Thomas process. We focus on simple estimation procedures and model checking based on functional summary statistics and the global envelope test....

  3. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  4. Large deviations for Gaussian processes in Hoelder norm

    International Nuclear Information System (INIS)

    Fatalov, V R

    2003-01-01

    Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with power-law covariance function. The investigation uses the method of double sums for Gaussian fields.

  5. First Passage Time Intervals of Gaussian Processes

    Science.gov (United States)

    Perez, Hector; Kawabata, Tsutomu; Mimaki, Tadashi

    1987-08-01

    The first passage time problem of a stationary Gaussian process is theoretically and experimentally studied. Renewal functions are derived for a time-dependent boundary and numerically calculated for a Gaussian process having a seventh-order Butterworth spectrum. The results show a multipeak property not only for the constant boundary but also for a linearly increasing boundary. The first passage time distribution densities were experimentally determined for a constant boundary. The renewal functions were shown to be a fairly good approximation to the distribution density over a limited range.

  6. MCEM algorithm for the log-Gaussian Cox process

    OpenAIRE

    Delmas, Celine; Dubois-Peyrard, Nathalie; Sabbadin, Regis

    2014-01-01

    Log-Gaussian Cox processes are an important class of models for aggregated point patterns. They have been largely used in spatial epidemiology (Diggle et al., 2005), in agronomy (Bourgeois et al., 2012), in forestry (Moller et al.), in ecology (sightings of wild animals) or in environmental sciences (radioactivity counts). A log-Gaussian Cox process is a Poisson process with a stochastic intensity depending on a Gaussian random field. We consider the case where this Gaussian random field is ...

  7. Pseudo inputs for pairwise learning with Gaussian processes

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Larsen, Jan

    2012-01-01

    We consider learning and prediction of pairwise comparisons between instances. The problem is motivated from a perceptual view point, where pairwise comparisons serve as an effective and extensively used paradigm. A state-of-the-art method for modeling pairwise data in high dimensional domains...... is based on a classical pairwise probit likelihood imposed with a Gaussian process prior. While extremely flexible, this non-parametric method struggles with an inconvenient O(n³) scaling in terms of the n input instances which limits the method only to smaller problems. To overcome this, we derive...... to other similar approximations that have been applied in standard Gaussian process regression and classification problems such as FI(T)C and PI(T)C.

  8. Terrain Mapping and Obstacle Detection Using Gaussian Processes

    DEFF Research Database (Denmark)

    Kjærgaard, Morten; Massaro, Alessandro Salvatore; Bayramoglu, Enis

    2011-01-01

    In this paper we consider a probabilistic method for extracting terrain maps from a scene and use the information to detect potential navigation obstacles within it. The method uses Gaussian process regression (GPR) to predict an estimate function and its relative uncertainty. To test the new...... show that the estimated maps follow the terrain shape, while protrusions are identified and may be isolated as potential obstacles. Representing the data with a covariance function allows a dramatic reduction of the amount of data to process, while maintaining the statistical properties of the measured...... and interpolated features....

  9. Finite Range Decomposition of Gaussian Processes

    CERN Document Server

    Brydges, C D; Mitter, P K

    2003-01-01

    Let $D$ be the finite difference Laplacian associated to the lattice $\mathbb{Z}^{d}$. For dimension $d \ge 3$, $a \ge 0$ and $L$ a sufficiently large positive dyadic integer, we prove that the integral kernel of the resolvent $G^{a}:=(a-D)^{-1}$ can be decomposed as an infinite sum of positive semi-definite functions $V_{n}$ of finite range, $V_{n}(x-y) = 0$ for $|x-y| \ge O(L)^{n}$. Equivalently, the Gaussian process on the lattice with covariance $G^{a}$ admits a decomposition into independent Gaussian processes with finite range covariances. For $a=0$, $V_{n}$ has a limiting scaling form $L^{-n(d-2)}\Gamma_{c,\ast}\bigl(\frac{x-y}{L^{n}}\bigr)$ as $n \rightarrow \infty$. As a corollary, such decompositions also exist for fractional powers $(-D)^{-\alpha/2}$, $0

  10. Optimality of Poisson Processes Intensity Learning with Gaussian Processes

    NARCIS (Netherlands)

    Kirichenko, A.; van Zanten, H.

    2015-01-01

    In this paper we provide theoretical support for the so-called "Sigmoidal Gaussian Cox Process" approach to learning the intensity of an inhomogeneous Poisson process on a d-dimensional domain. This method was proposed by Adams, Murray and MacKay (ICML, 2009), who developed a tractable computational

  11. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). In some cases, however, the count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, consequently, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling over-dispersed paired count data. The BPIGR model produces a single global model for all locations. However, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.

  12. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  13. Learning non-Gaussian Time Series using the Box-Cox Gaussian Process

    OpenAIRE

    Rios, Gonzalo; Tobar, Felipe

    2018-01-01

    Gaussian processes (GPs) are Bayesian nonparametric generative models that provide interpretability of hyperparameters, admit closed-form expressions for training and inference, and are able to accurately represent uncertainty. To model general non-Gaussian data with complex correlation structure, GPs can be paired with an expressive covariance kernel and then fed into a nonlinear transformation (or warping). However, overparametrising the kernel and the warping is known to, respectively, hin...

  14. Modelling and control of dynamic systems using gaussian process models

    CERN Document Server

    Kocijan, Juš

    2016-01-01

    This monograph opens up new horizons for engineers and researchers in academia and in industry dealing with or interested in new developments in the field of system identification and control. It emphasizes guidelines for working solutions and practical advice for their implementation rather than the theoretical background of Gaussian process (GP) models. The book demonstrates the potential of this recent development in probabilistic machine-learning methods and gives the reader an intuitive understanding of the topic. The current state of the art is treated along with possible future directions for research. Systems control design relies on mathematical models and these may be developed from measurement data. This process of system identification, when based on GP models, can play an integral part of control design in data-based control and its description as such is an essential aspect of the text. The background of GP regression is introduced first with system identification and incorporation of prior know...

  15. Convergence of posteriors for discretized log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    2004-01-01

    In Markov chain Monte Carlo posterior computation for log Gaussian Cox processes (LGCPs) a discretization of the continuously indexed Gaussian field is required. It is demonstrated that approximate posterior expectations computed from discretized LGCPs converge to the exact posterior expectations...... when the cell sizes of the discretization tends to zero. The effect of discretization is studied in a data example....

  16. Efficient Blind System Identification of Non-Gaussian Auto-Regressive Models with HMM Modeling of the Excitation

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2007-01-01

    We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussi...... outputs. The signal models are general and suitable to numerous important signals, such as speech signals and base-band communication signals. Applications to speech analysis and blind channel equalization are given to exemplify the efficiency of the new methods....

  17. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    Full Text Available In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of processes induced by the variability of the vehicle speed does not form a major difficulty because the designer can have good control over the vehicle speed by characterising the histogram of instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian and are generated mainly by the non-linear behaviour of the undercarriage and the strong occurrence of shocks generated by roughness of the terrain. This non-Gaussian nature is expressed particularly by very high flattening levels that can affect the design of structures under extreme stresses conventionally acquired by spectral approaches, inherent to Gaussian processes and based essentially on spectral moments of stress processes. Due to these technical considerations, techniques for characterisation of random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches as described in the body of the text rather than spectral domain approaches.

  18. Approximation problems with the divergence criterion for Gaussian variablesand Gaussian processes

    NARCIS (Netherlands)

    A.A. Stoorvogel; J.H. van Schuppen (Jan)

    1996-01-01

    System identification for stationary Gaussian processes includes an approximation problem. Currently the subspace algorithm for this problem enjoys much attention. This algorithm is based on a transformation of a finite time series to canonical variable form followed by a truncation.

  19. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length $t$ in $O(t^2)$ time while maintaining a $O(t)$ memory footprint, compared to the $O(t^3)$ time and $O(t^2)$ memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
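
    The Toeplitz trick mentioned above relies on two facts: a stationary kernel evaluated on a regularly sampled time grid gives a Toeplitz Gram matrix, so only its first column needs to be stored and the linear system can be solved with a Levinson-type routine. The sketch below illustrates that idea with scipy's solve_toeplitz on a made-up series; it is not the authors' change detection algorithm, and the kernel and noise settings are arbitrary.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        rng = np.random.default_rng(4)
        t = np.arange(500) * 0.05                     # regular sampling grid
        y = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.standard_normal(t.size)

        noise, length = 0.1, 0.4
        # First column of the stationary squared-exponential Gram matrix: O(t) memory.
        col = np.exp(-0.5 * (t - t[0]) ** 2 / length ** 2)
        col[0] += noise ** 2                          # noise variance on the diagonal

        # Levinson-type solve of (K + noise^2 I) alpha = y in O(t^2) time.
        alpha = solve_toeplitz(col, y)

        # Posterior mean at the training points, i.e. the smoothed series a change
        # detector could compare new observations against.
        mean = np.array([np.exp(-0.5 * (t - ti) ** 2 / length ** 2) @ alpha for ti in t])
        print("max |smoothed - observed|: %.3f" % np.abs(mean - y).max())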

  20. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process...... a chaos representation.

  1. Poisson and Gaussian approximation of weighted local empirical processes

    NARCIS (Netherlands)

    Einmahl, J.H.J.

    1995-01-01

    We consider the local empirical process indexed by sets, a greatly generalized version of the well-studied uniform tail empirical process. We show that the weak limit of weighted versions of this process is Poisson under certain conditions, whereas it is Gaussian in other situations. Our main

  2. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.

    Science.gov (United States)

    García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that requires the use of Gaussian processes, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.

  3. Standard sirens and dark sector with Gaussian process*

    Directory of Open Access Journals (Sweden)

    Cai Rong-Gen

    2018-01-01

    Full Text Available Gravitational waves from compact binary systems are viewed as standard sirens to probe the evolution of the universe. This paper summarizes the potential of gravitational waves to constrain the cosmological parameters and the dark sector interaction within the Gaussian process methodology. After briefly introducing the method for reconstructing the dark sector interaction with a Gaussian process, the concept of standard sirens and the analysis of reconstructing the dark sector interaction with LISA are outlined. Furthermore, we estimate the ability of gravitational waves to constrain the cosmological parameters with ET. The numerical methods we use are Gaussian processes and Markov chain Monte Carlo. Finally, we also forecast the improvements in the constraints on the cosmological parameters with ET and LISA combined with the Planck data.

  4. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  5. Representation and properties of a class of conditionally Gaussian processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Pedersen, Jan

    2009-01-01

    It is shown that the class of conditionally Gaussian processes with independent increments is stable under marginalisation and conditioning. Moreover, in general such processes can be represented as integrals of a time changed Brownian motion where the time change and the integrand are jointly in...

  6. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...

  7. Bayesian analysis of log Gaussian Cox processes for disease mapping

    DEFF Research Database (Denmark)

    Benes, Viktor; Bodlák, Karel; Møller, Jesper

    We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis, and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence...... of the risk on the covariates. Instead of using the common area-level approaches we consider a Bayesian analysis for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods...

  8. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    Science.gov (United States)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
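
    As a sketch of the GMR half of the approach only (the HMM state sequence and the hydrological covariates are omitted, and all data below are synthetic), one can fit a joint Gaussian mixture over a covariate and the predictand and form the conditional mean analytically:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic data with two regimes; x plays the role of the antecedent flow.
        rng = np.random.default_rng(5)
        x = rng.uniform(0.0, 10.0, 500)
        y = np.where(x < 5, 2.0 * x, 30.0 - 4.0 * x) + rng.standard_normal(500)

        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gmm.fit(np.column_stack([x, y]))

        def gmr_mean(x0):
            # Conditional mean E[y | x = x0] under the fitted joint Gaussian mixture.
            means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
            resp = np.array([w[k] * np.exp(-0.5 * (x0 - means[k, 0]) ** 2 / covs[k, 0, 0])
                             / np.sqrt(covs[k, 0, 0]) for k in range(len(w))])
            resp /= resp.sum()
            cond = np.array([means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (x0 - means[k, 0])
                             for k in range(len(w))])
            return float(resp @ cond)

        print("E[y | x=3] ~ %.2f, E[y | x=8] ~ %.2f" % (gmr_mean(3.0), gmr_mean(8.0)))

    In the full HMM-GMR scheme described above, a conditional distribution of this kind would be formed per hidden state and weighted by the state probabilities, which also yields the predictive spread rather than only the mean.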

  9. Neural pulse frequency modulation of an exponentially correlated Gaussian process

    Science.gov (United States)

    Hutchinson, C. E.; Chon, Y.-T.

    1976-01-01

    The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.

  10. PSSGP : Program for Simulation of Stationary Gaussian Processes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    This report describes the computer program PSSGP. PSSGP can be used to simulate realizations of stationary Gaussian stochastic processes. The simulation algorithm can be coupled with some applications. One possibility is to use PSSGP to estimate the first-passage density function of a given system...

  11. Distributed Prognostic Health Management with Gaussian Process Regression

    Science.gov (United States)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

    Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The prognostics application that was used to demonstrate our new algorithms is battery prognostics. In order to present trade-offs within different prognostic approaches, we present a comparison with the distributed implementation of a particle-filter-based prognostics algorithm for the same battery data.

  12. TYPE Ia SUPERNOVA COLORS AND EJECTA VELOCITIES: HIERARCHICAL BAYESIAN REGRESSION WITH NON-GAUSSIAN DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J.

    2014-01-01

    We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.

  13. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal...... high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria, the first one prefers a model with interpretable active set parameters whereas the second puts computational complexity first, thus a model...... with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...

  14. Calculations of Sobol indices for the Gaussian process metamodel

    Energy Technology Data Exchange (ETDEWEB)

    Marrel, Amandine [CEA, DEN, DTN/SMTM/LMTE, F-13108 Saint Paul lez Durance (France)], E-mail: amandine.marrel@cea.fr; Iooss, Bertrand [CEA, DEN, DER/SESI/LCFR, F-13108 Saint Paul lez Durance (France); Laurent, Beatrice [Institut de Mathematiques, Universite de Toulouse (UMR 5219) (France); Roustant, Olivier [Ecole des Mines de Saint-Etienne (France)

    2009-03-15

    Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques, which require a large number of model evaluations, are often impractical for time-expensive computer codes. A well-known and widely used remedy consists in replacing the computer code by a metamodel, which predicts the model responses with a negligible computation time and renders the estimation of Sobol indices straightforward. In this paper, we discuss the Gaussian process model, which gives analytical expressions for the Sobol indices. Two approaches are studied to compute the Sobol indices: the first based on the predictor of the Gaussian process model and the second based on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach makes it possible to integrate the modeling error of the Gaussian process model by directly giving confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
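
    A sketch of the first of the two approaches above (Sobol indices computed from the Gaussian process predictor), using a Monte Carlo pick-freeze estimator instead of the analytical expressions discussed in the paper; the Ishigami-like test function, the design sizes and the scikit-learn kernel are illustrative stand-ins for an expensive computer code.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Stand-in for an expensive model: only a small design is affordable for fitting.
        def model(X):
            return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
                    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

        rng = np.random.default_rng(6)
        X_train = rng.uniform(-np.pi, np.pi, size=(200, 3))
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0]),
                                      normalize_y=True)
        gp.fit(X_train, model(X_train))

        # First-order Sobol indices via pick-freeze Monte Carlo on the cheap predictor.
        N = 20000
        A = rng.uniform(-np.pi, np.pi, size=(N, 3))
        B = rng.uniform(-np.pi, np.pi, size=(N, 3))
        yA, yB = gp.predict(A), gp.predict(B)
        total_var = yA.var()
        for i in range(3):
            Bi = B.copy()
            Bi[:, i] = A[:, i]                         # freeze input i at A's values
            S_i = np.mean(yA * (gp.predict(Bi) - yB)) / total_var
            print("first-order Sobol index S_%d ~ %.2f" % (i + 1, S_i))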

  15. Calculations of Sobol indices for the Gaussian process metamodel

    International Nuclear Information System (INIS)

    Marrel, Amandine; Iooss, Bertrand; Laurent, Beatrice; Roustant, Olivier

    2009-01-01

    Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques, which require a large number of model evaluations, are often impractical for computationally expensive codes. A well-known and widely used remedy consists in replacing the computer code by a metamodel, which predicts the model responses with a negligible computation time and renders the estimation of Sobol indices straightforward. In this paper, we discuss the Gaussian process model, which gives analytical expressions for the Sobol indices. Two approaches are studied to compute the Sobol indices: the first is based on the predictor of the Gaussian process model and the second on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach makes it possible to account for the modeling error of the Gaussian process model by directly giving confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
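
    As a rough sketch of the first (predictor-based) approach described in the two records above, the snippet below fits a Gaussian process metamodel to a small design of runs of a stand-in "computer code" and then estimates first-order Sobol indices by Monte Carlo on the cheap GP predictor. The test function, design size and pick-and-freeze estimator are illustrative choices, not those of the paper.

```python
# Hedged sketch: GP metamodel + Monte Carlo (pick-and-freeze) first-order Sobol indices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
f = lambda X: np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2     # stand-in "computer code"

# Fit the GP metamodel on a small experimental design
X_train = rng.uniform(-np.pi, np.pi, size=(60, 2))
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]), normalize_y=True)
gp.fit(X_train, f(X_train))

# Estimate S_i = Cov(f(A), f(AB_i)) / Var(f(A)) on the metamodel predictor
N = 20000
A = rng.uniform(-np.pi, np.pi, size=(N, 2))
B = rng.uniform(-np.pi, np.pi, size=(N, 2))
yA = gp.predict(A)
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]                                        # freeze input i at its A values
    S_i = np.cov(yA, gp.predict(ABi))[0, 1] / yA.var()
    print(f"first-order Sobol index S_{i + 1} ~ {S_i:.2f}")
```

    The second approach in the records, which propagates the metamodel error into confidence intervals on the indices, would additionally use the GP posterior covariance rather than the predictor alone.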

  16. State-Space Inference and Learning with Gaussian Processes

    OpenAIRE

    Turner, R; Deisenroth, MP; Rasmussen, CE

    2010-01-01

    State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...

  17. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.; Ma, Y.; Sang, H.

    2011-01-01

    We derive a closed-form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ² by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  18. Cautious NMPC with Gaussian Process Dynamics for Miniature Race Cars

    OpenAIRE

    Hewing, Lukas; Liniger, Alexander; Zeilinger, Melanie N.

    2017-01-01

    This paper presents an adaptive high performance control method for autonomous miniature race cars. Racing dynamics are notoriously hard to model from first principles, which is addressed by means of a cautious nonlinear model predictive control (NMPC) approach that learns to improve its dynamics model from data and safely increases racing performance. The approach makes use of a Gaussian Process (GP) and takes residual model uncertainty into account through a chance constrained formulation. ...

  19. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed-form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ² by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  20. Stochastic Analysis of Gaussian Processes via Fredholm Representation

    Directory of Open Access Journals (Sweden)

    Tommi Sottinen

    2016-01-01

    Full Text Available We show that every separable Gaussian process with integrable variance function admits a Fredholm representation with respect to a Brownian motion. We extend the Fredholm representation to a transfer principle and develop stochastic analysis by using it. We show the convenience of the Fredholm representation by giving applications to equivalence in law, bridges, series expansions, stochastic differential equations, and maximum likelihood estimations.

  1. ON REGRESSION REPRESENTATIONS OF STOCHASTIC-PROCESSES

    NARCIS (Netherlands)

    RUSCHENDORF, L; DEVALK

    We construct a.s. nonlinear regression representations of general stochastic processes (X(n))n is-an-element-of N. As a consequence we obtain in particular special regression representations of Markov chains and of certain m-dependent sequences. For m-dependent sequences we obtain a constructive

  2. Blind signal processing algorithms under DC biased Gaussian noise

    Science.gov (United States)

    Kim, Namyong; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Distortions caused by the DC-biased laser input can be modeled as DC-biased Gaussian noise, and removing the DC bias is important in the demodulation process of the electrical signal in most optical communications. In this paper, a new performance criterion and a related algorithm for unsupervised equalization are proposed for communication systems in the environment of channel distortions and DC-biased Gaussian noise. The proposed criterion utilizes the Euclidean distance between the Dirac delta function located at zero on the error axis and the probability density function of biased constant modulus errors, where the constant modulus error is defined as the difference between the system output and a constant modulus calculated from the transmitted symbol points. In simulations under channel models with fading and with DC bias noise abruptly added to the background Gaussian noise, the proposed algorithm converges rapidly even after the interruption of the DC bias, proving that the proposed criterion can be effectively applied to optical communication systems corrupted by channel distortions and DC bias noise.

  3. Leveraging Gaussian process approximations for rapid image overlay production

    CSIR Research Space (South Africa)

    Burke, Michael

    2017-10-01

    Full Text Available The next sample is taken at the location of maximum posterior predictive variance, x_s = argmax_{x*} [K(x*, x*) − K(x*, x) K(x, x)^{-1} K(x, x*)] (Eq. 10). Figure 2 illustrates this sampling strategy more clearly. This selection process can be slow, but could be bootstrapped using Latin hypercube sampling [16]. Empirically, a 240-sample Gaussian process approximation takes roughly the same amount of time to compute as the full blanked overlay. [Figure: boxplot of storyboard ratings comparing GP approximations with 50-400 samples against the full overlay and the Itti-Koch method.]
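
    The selection rule in Eq. (10), maximising the GP posterior predictive variance over candidate locations, can be sketched in plain numpy; the squared-exponential kernel, its hyperparameters and the random candidate set below are illustrative assumptions rather than the paper's setup.

```python
# Illustrative sketch: pick the candidate point with the largest GP posterior variance.
import numpy as np

def rbf(A, B, ell=0.1, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def next_sample(X_obs, X_cand, noise=1e-4):
    """Return the candidate location with maximum posterior predictive variance."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs)
    prior_var = np.diag(rbf(X_cand, X_cand))
    var = prior_var - np.einsum('ij,jk,ik->i', Ks, np.linalg.inv(K), Ks)
    return X_cand[np.argmax(var)]

X_obs = np.random.default_rng(0).uniform(size=(20, 2))      # already-sampled pixels (unit square)
X_cand = np.random.default_rng(1).uniform(size=(500, 2))    # candidate pixels
print("next sample location:", next_sample(X_obs, X_cand))
```

    As the record notes, the candidate set could instead be seeded with a Latin hypercube design to speed up the selection.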

  4. Fault Tolerant Control Using Gaussian Processes and Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Yang Xiaoke

    2015-03-01

    Full Text Available Essential ingredients for fault-tolerant control are the ability to represent system behaviour following the occurrence of a fault, and the ability to exploit this representation for deciding control actions. Gaussian processes seem to be very promising candidates for the first of these, and model predictive control has a proven capability for the second. We therefore propose to use the two together to obtain fault-tolerant control functionality. Our proposal is illustrated by several reasonably realistic examples drawn from flight control.

  5. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics

  6. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. The proposed approach provides more probabilistic information for wind speed predictions besides improving the forecasting accuracy for single-value predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach can make probabilistic modeling for wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
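
    A minimal sketch of the combination step only, under assumed synthetic data: the individual engines' point forecasts become GPR inputs and the observed wind speed is the target, so the forecasts are combined nonlinearly and a predictive uncertainty is returned. The EWT preprocessing and the actual ARIMA/ELM/SVM/LSSVM engines are not reproduced here.

```python
# Hedged sketch: nonlinear combination of several point forecasts with a GPR model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
y = rng.weibull(2.0, size=300) * 6.0                        # observed wind speed (synthetic, m/s)
# Stand-ins for the ARIMA / ELM / SVM / LSSVM forecasts: noisy versions of the truth
F = np.column_stack([y + rng.normal(0, s, size=y.size) for s in (0.8, 1.0, 1.2, 1.5)])

train, test = slice(0, 250), slice(250, None)
combiner = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1.0), normalize_y=True)
combiner.fit(F[train], y[train])
mean, std = combiner.predict(F[test], return_std=True)      # combined forecast + uncertainty

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("combined RMSE:", rmse(mean, y[test]),
      "| best single engine RMSE:", min(rmse(F[test, j], y[test]) for j in range(4)))
```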

  7. Inferring probabilistic stellar rotation periods using Gaussian processes

    Science.gov (United States)

    Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh

    2018-02-01

    Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
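
    The quasi-periodic covariance referred to above is commonly written as k(t, t') = A^2 exp(-(t - t')^2 / (2 l^2) - Gamma sin^2(pi (t - t') / P)). The sketch below scores a synthetic light curve's GP marginal likelihood over a grid of trial periods P; the fixed hyperparameters and the grid search (rather than the MCMC marginalisation used in the record) are simplifying assumptions.

```python
# Minimal sketch: quasi-periodic GP kernel and marginal likelihood over trial periods.
import numpy as np

def quasi_periodic_kernel(t1, t2, A=1.0, ell=30.0, gamma=1.0, period=10.0):
    dt = t1[:, None] - t2[None, :]
    return A ** 2 * np.exp(-0.5 * dt ** 2 / ell ** 2
                           - gamma * np.sin(np.pi * dt / period) ** 2)

def log_marginal_likelihood(t, y, noise, **kernel_args):
    K = quasi_periodic_kernel(t, t, **kernel_args) + noise ** 2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(t) * np.log(2 * np.pi)

# Synthetic quasi-periodic "light curve" with a 12.3-day rotation period
rng = np.random.default_rng(0)
t = np.linspace(0, 90, 300)
y = np.sin(2 * np.pi * t / 12.3) * np.exp(-t / 120.0) + 0.05 * rng.normal(size=t.size)

periods = np.linspace(5, 25, 200)
best = periods[np.argmax([log_marginal_likelihood(t, y, 0.05, period=P) for P in periods])]
print("most likely rotation period ~", best, "days")
```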

  8. Fast uncertainty reduction strategies relying on Gaussian process models

    International Nuclear Information System (INIS)

    Chevalier, Clement

    2013-01-01

    This work deals with sequential and batch-sequential evaluation strategies of real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal Stepwise Uncertainty Reduction (SUR) strategies are investigated for two different problems, motivated by real test cases in nuclear safety. First we consider the problem of identifying the excursion set above a given threshold T of a real-valued function f. Then we study the question of finding the set of 'safe controlled configurations', i.e. the set of controlled inputs where the function remains below T, whatever the values of the other non-controlled inputs. New SUR strategies are presented, together with efficient procedures and formulas to compute and use them in real-world applications. The use of fast formulas to quickly recalculate the posterior mean or covariance function of a Gaussian process (referred to as the 'kriging update formulas') not only provides substantial computational savings; it is also one of the key tools for deriving closed-form formulas that enable a practical use of computationally intensive sampling strategies. A contribution in batch-sequential optimization (with the multi-points Expected Improvement) is also presented. (author)

  9. Field dose radiation determination by active learning with Gaussian Process for autonomous robot guiding

    International Nuclear Information System (INIS)

    Freitas Naiff, Danilo de; Silveira, Paulo R.; Pereira, Claudio M.N.A.

    2017-01-01

    This article proposes an approach for the determination of the radiation dose profile in a radiation-susceptible environment, aiming to guide an autonomous robot acting in such environments and thereby reduce human exposure to dangerous doses. The approach consists of an active learning method based on information entropy reduction, using a log-normally warped Gaussian Process (GP) as surrogate model, resulting in nonlinear online regression with sequential measurements. Experiments with simulated radiation dose fields of varying complexity were performed, and the results showed that the approach was effective in reconstructing the field with high accuracy from relatively few measurements. The technique also showed some robustness to the measurement noise present in real measurements, by assuming Gaussian noise. (author)

  10. Field dose radiation determination by active learning with Gaussian Process for autonomous robot guiding

    Energy Technology Data Exchange (ETDEWEB)

    Freitas Naiff, Danilo de; Silveira, Paulo R.; Pereira, Claudio M.N.A., E-mail: danilonai1992@poli.ufrj.br, E-mail: paulo@lmp.ufrj.br, E-mail: cmnap@ien.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    This article proposes an approach for the determination of the radiation dose profile in a radiation-susceptible environment, aiming to guide an autonomous robot acting in such environments and thereby reduce human exposure to dangerous doses. The approach consists of an active learning method based on information entropy reduction, using a log-normally warped Gaussian Process (GP) as surrogate model, resulting in nonlinear online regression with sequential measurements. Experiments with simulated radiation dose fields of varying complexity were performed, and the results showed that the approach was effective in reconstructing the field with high accuracy from relatively few measurements. The technique also showed some robustness to the measurement noise present in real measurements, by assuming Gaussian noise. (author)
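
    A hedged sketch of the surrogate described in the two records above: an ordinary GP is fitted to log-transformed dose-rate measurements (a log-normally warped GP) and, because the entropy of a Gaussian grows monotonically with its variance, the next measurement is taken where the predictive standard deviation is largest. The synthetic field, sensor noise and kernel choices are illustrative assumptions.

```python
# Hedged sketch: log-warped GP with uncertainty-driven sequential measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
true_dose = lambda X: 5.0 * np.exp(-8.0 * ((X - 0.3) ** 2).sum(axis=1)) + 0.05  # synthetic field

grid = np.stack(np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40)), -1).reshape(-1, 2)
X_meas = rng.uniform(size=(5, 2))                                  # initial measurement points
y_log = np.log(true_dose(X_meas) * rng.lognormal(0.0, 0.05, len(X_meas)))

gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-3), normalize_y=True)
for step in range(20):
    gp.fit(X_meas, y_log)
    _, std_log = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(std_log)]                               # largest predictive uncertainty
    y_new = np.log(true_dose(x_new[None, :])[0] * rng.lognormal(0.0, 0.05))
    X_meas, y_log = np.vstack([X_meas, x_new]), np.append(y_log, y_new)

gp.fit(X_meas, y_log)
dose_map = np.exp(gp.predict(grid)).reshape(40, 40)                # back-transformed dose field
print("reconstructed peak dose ~", dose_map.max())
```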

  11. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  12. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
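
    As a small illustration of the simplest scheme compared in the two records above, the sketch below performs ordinary least-squares post-processing: past observations are regressed on raw forecasts and the fitted correction is applied to new forecasts. The data and single-predictor setup are synthetic assumptions.

```python
# Illustrative sketch: OLS post-processing of a biased, damped forecast.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=400)
raw = 0.7 * truth + 0.5 + rng.normal(0, 0.4, size=truth.size)    # biased, damped forecast

X = np.column_stack([np.ones_like(raw), raw])                    # intercept + forecast predictor
coef, *_ = np.linalg.lstsq(X[:300], truth[:300], rcond=None)     # train on the first 300 cases
corrected = X[300:] @ coef                                       # post-processed forecasts

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("raw RMSE:", rmse(raw[300:], truth[300:]),
      "| corrected RMSE:", rmse(corrected, truth[300:]))
```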

  13. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  14. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position that is most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on both the nominal surface and a manufactured surface have been conducted on nano-structure surfaces to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.

  15. Gaussian-log-Gaussian wavelet trees, frequentist and Bayesian inference, and statistical signal processing applications

    DEFF Research Database (Denmark)

    Møller, Jesper; Jacobsen, Robert Dahl

    We introduce a promising alternative to the usual hidden Markov tree model for Gaussian wavelet coefficients, where their variances are specified by the hidden states and take values in a finite set. In our new model, the hidden states have a similar dependence structure but they are jointly Gaus...

  16. Frequentist and Bayesian inference for Gaussian-log-Gaussian wavelet trees and statistical signal processing applications

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Møller, Jesper

    2017-01-01

    We introduce new estimation methods for a subclass of the Gaussian scale mixture models for wavelet trees by Wainwright, Simoncelli and Willsky that rely on modern results for composite likelihoods and approximate Bayesian inference. Our methodology is illustrated for denoising and edge detection...

  17. Inferring time derivatives including cell growth rates using Gaussian processes

    Science.gov (United States)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
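
    A minimal sketch of the idea, assuming a squared-exponential kernel: differentiation is a linear operator, so the posterior mean of df/dt follows by differentiating the kernel with respect to the test input and reusing the usual GP weights K⁻¹y. The growth-curve data and hyperparameters are synthetic placeholders, and the error estimates highlighted in the record are omitted for brevity.

```python
# Illustrative sketch: posterior mean of the time derivative of a GP fit.
import numpy as np

def se_kernel(a, b, ell, sf):
    return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_derivative_mean(t_train, y, t_star, ell=2.0, sf=1.0, noise=0.05):
    K = se_kernel(t_train, t_train, ell, sf) + noise ** 2 * np.eye(len(t_train))
    alpha = np.linalg.solve(K, y)
    Ks = se_kernel(t_star, t_train, ell, sf)
    dKs = -(t_star[:, None] - t_train[None, :]) / ell ** 2 * Ks   # d k(t*, t_i) / d t*
    return dKs @ alpha

# Synthetic optical-density growth curve; the specific growth rate is d(log OD)/dt
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 60)
od = 0.05 + 1.0 / (1.0 + np.exp(-(t - 5.0))) + 0.01 * rng.normal(size=t.size)
t_star = np.linspace(0, 10, 200)
growth_rate = gp_derivative_mean(t, np.log(od), t_star)
print("maximum specific growth rate ~", growth_rate.max(), "per unit time")
```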

  18. Learning with Uncertainty - Gaussian Processes and Relevance Vector Machines

    DEFF Research Database (Denmark)

    Candela, Joaquin Quinonero

    2004-01-01

    This thesis is concerned with Gaussian Processes (GPs) and Relevance Vector Machines (RVMs), both of which are particular instances of probabilistic linear models. We look at both models from a Bayesian perspective, and are forced to adopt an approximate Bayesian treatment to learning for two...... reasons. The first reason is the analytical intractability of the full Bayesian treatment and the fact that we in principle do not want to resort to sampling methods. The second reason, which incidentally justifies our not wanting to sample, is that we are interested in computationally efficient models...... approaches that ignore the accumulated uncertainty are way overconfident. Finally we explore a much harder problem: that of training with uncertain inputs. We explore approximating the full Bayesian treatment, which implies an analytically intractable integral. We propose two preliminary approaches...

  19. Bayesian regression of piecewise homogeneous Poisson processes

    Directory of Open Access Journals (Sweden)

    Diego Sevilla

    2015-12-01

    Full Text Available In this paper, a Bayesian method for piecewise regression is adapted to handle counting processes data distributed as Poisson. A numerical code in Mathematica is developed and tested analyzing simulated data. The resulting method is valuable for detecting breaking points in the count rate of time series for Poisson processes. Received: 2 November 2015, Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia.; DOI: http://dx.doi.org/10.4279/PIP.070018 Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015

  20. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — Varun Chandola and Ranga Raju Vatsavai. Abstract: Biomass monitoring,...

  1. Gaussian random-matrix process and universal parametric correlations in complex systems

    International Nuclear Information System (INIS)

    Attias, H.; Alhassid, Y.

    1995-01-01

    We introduce the framework of the Gaussian random-matrix process as an extension of Dyson's Gaussian ensembles and use it to discuss the statistical properties of complex quantum systems that depend on an external parameter. We classify the Gaussian processes according to the short-distance diffusive behavior of their energy levels and demonstrate that all parametric correlation functions become universal upon the appropriate scaling of the parameter. The class of differentiable Gaussian processes is identified as the relevant one for most physical systems. We reproduce the known spectral correlators and compute eigenfunction correlators in their universal form. Numerical evidence from both a chaotic model and a weakly disordered model confirms our predictions

  2. Test of the cosmic evolution using Gaussian processes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ming-Jian [Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Science, P.O. Box 918-3, Beijing 100049 (China); Xia, Jun-Qing, E-mail: zhangmj@ihep.ac.cn, E-mail: xiajq@bnu.edu.cn [Department of Astronomy, Beijing Normal University, No. 19, XinJieKouWai St., Beijing 100875 (China)

    2016-12-01

    Much focus has been placed on the possible slowing down of cosmic acceleration under dark energy parametrizations. In the present paper, we investigate this subject using Gaussian processes (GP), without resorting to a particular template of dark energy. The reconstruction is carried out using abundant data, including luminosity distances from the Union2 and Union2.1 compilations and gamma-ray bursts, and the dynamical Hubble parameter. It suggests that slowing down of cosmic acceleration cannot be present within 95% C.L., when considering the influence of spatial curvature and the Hubble constant. In order to reveal the reason for the tension between our reconstruction and previous parametrization constraints for the Union2 data, we compare them and find that the slowing down of acceleration in some parametrizations is only a 'mirage'. Although these parametrizations fit the observational data well, their tension can be revealed by higher-order derivatives of the distance D. The GP method, by contrast, is able to faithfully model the cosmic expansion history.

  3. Hierarchical adaptive experimental design for Gaussian process emulators

    International Nuclear Information System (INIS)

    Busby, Daniel

    2009-01-01

    Large computer simulators usually have complex and nonlinear input-output functions. This complicated input-output relation can be analyzed by global sensitivity analysis; however, this usually requires massive Monte Carlo simulations. To effectively reduce the number of simulations, statistical techniques such as Gaussian process emulators can be adopted. The accuracy and reliability of these emulators strongly depend on the experimental design, where suitable evaluation points are selected. In this paper a new sequential design strategy called hierarchical adaptive design is proposed to obtain an accurate emulator using the smallest possible number of simulations. The hierarchical design proposed in this paper is tested on various standard analytic functions and on a challenging reservoir forecasting application. Comparisons with standard one-stage designs such as maximin Latin hypercube designs show that the hierarchical adaptive design produces a more accurate emulator with the same number of computer experiments. Moreover, a stopping criterion is proposed that makes it possible to perform only the number of simulations necessary to reach the required approximation accuracy.

  4. Probabilistic sensitivity analysis of system availability using Gaussian processes

    International Nuclear Information System (INIS)

    Daneshkhah, Alireza; Bedford, Tim

    2013-01-01

    The availability of a system under a given failure/repair process is a function of time which can be determined through a set of integral equations and usually calculated numerically. We focus here on the issue of carrying out sensitivity analysis of availability to determine the influence of the input parameters. The main purpose is to study the sensitivity of the system availability with respect to the changes in the main parameters. In the simplest case that the failure repair process is (continuous time/discrete state) Markovian, explicit formulae are well known. Unfortunately, in more general cases availability is often a complicated function of the parameters without closed form solution. Thus, the computation of sensitivity measures would be time-consuming or even infeasible. In this paper, we show how Sobol and other related sensitivity measures can be cheaply computed to measure how changes in the model inputs (failure/repair times) influence the outputs (availability measure). We use a Bayesian framework, called the Bayesian analysis of computer code output (BACCO) which is based on using the Gaussian process as an emulator (i.e., an approximation) of complex models/functions. This approach allows effective sensitivity analysis to be achieved by using far smaller numbers of model runs than other methods. The emulator-based sensitivity measure is used to examine the influence of the failure and repair densities' parameters on the system availability. We discuss how to apply the methods practically in the reliability context, considering in particular the selection of parameters and prior distributions and how we can ensure these may be considered independent—one of the key assumptions of the Sobol approach. The method is illustrated on several examples, and we discuss the further implications of the technique for reliability and maintenance analysis

  5. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.

  6. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it
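
    An illustrative sketch, in the spirit of the record above but not its dataset or code, of GP regression with automatic relevance determination: one length-scale per descriptor is learned by maximising the marginal likelihood, and descriptors whose fitted length-scales are long are flagged as having little influence. The synthetic descriptors below are placeholders for quantities such as log P or melting point.

```python
# Hedged sketch: ARD-style feature relevance from fitted per-dimension length-scales.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))                     # synthetic stand-ins for 5 descriptors
log_kp = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.4 * X[:, 2] + 0.1 * rng.normal(size=120)

kernel = RBF(length_scale=np.ones(5)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, log_kp)

lengths = gp.kernel_.k1.length_scale              # fitted per-descriptor length-scales
for i, ell in enumerate(lengths):
    print(f"descriptor {i}: length-scale {ell:.2f} (shorter = more relevant)")
```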

  7. Nested polynomial trends for the improvement of Gaussian process-based predictors

    Science.gov (United States)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. For deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of the mean function, based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.

  8. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.

  9. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    Science.gov (United States)

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to the self-charging and the capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as the support vector machines (SVM) or the Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can process multimodality by fitting different segments of trajectories with different GPR models separately, such that the tiny differences among these segments can be revealed. The method is demonstrated to be effective for prediction by the excellent predictive result of the experiments on the two commercial and chargeable Type 1850 Lithium-ion batteries, provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, GPM can yield the predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176

  10. Global Optimization Employing Gaussian Process-Based Bayesian Surrogates

    Directory of Open Access Journals (Sweden)

    Roland Preuss

    2018-03-01

    Full Text Available The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive it is necessary to exploit the computational budget in the best possible manner. If for a few input parameter settings an output data set has been acquired, one could be interested in taking these data as a basis for finding an extremum and possibly an input parameter set for further computer simulations to determine it—a task which belongs to the realm of global optimization. Within the Bayesian framework we utilize Gaussian processes for the creation of a surrogate model function adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we make the assumption that only the use of their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is somewhat justified by the fact that we are not interested in a full representation of the model by the surrogate but to reveal its maximum. To accomplish this the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions—expected improvement and maximum variance—in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either remains in the points found or the surrogate model does not change with the iteration. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach.
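
    A bare-bones sketch of one of the two utility functions mentioned above, expected improvement (here for maximisation), driving a GP surrogate over a toy objective. The alternation with a maximum-variance utility and the handling of hyperparameter expectation values are omitted, and all settings are placeholders.

```python
# Illustrative sketch: GP surrogate optimization with an expected-improvement utility.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

f = lambda x: np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x - 6.0) ** 2 / 4.0)   # toy objective
rng = np.random.default_rng(0)
X = rng.uniform(0, 8, size=(4, 1))                      # a few expensive "simulations"
y = f(X).ravel()
grid = np.linspace(0, 8, 400).reshape(-1, 1)

for it in range(15):
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement (maximisation)
    x_new = grid[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new)[0])

print("best input found:", X[np.argmax(y)], "value:", y.max())
```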

  11. A criterion for testing hypotheses about the covariance function of a stationary Gaussian stochastic process

    OpenAIRE

    Kozachenko, Yuriy; Troshki, Viktor

    2015-01-01

    We consider a measurable stationary Gaussian stochastic process. A criterion for testing hypotheses about the covariance function of such a process using estimates for its norm in the space $L_p(\mathbb{T})$, $p \geq 1$, is constructed.

  12. The Prediction of Length-of-day Variations Based on Gaussian Processes

    Science.gov (United States)

    Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.

    2015-01-01

    Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm --- the Gaussian process (GP) model is employed to forecast the LOD variations. Its prediction precisions are analyzed and compared with those of the back propagation neural networks (BPNN), general regression neural networks (GRNN) models, and the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.

  13. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model achieves a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.

  14. On Sparse Multi-Task Gaussian Process Priors for Music Preference Learning

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Larsen, Jan

    In this paper we study pairwise preference learning in a music setting with multitask Gaussian processes and examine the effect of sparsity in the input space as well as in the actual judgments. To introduce sparsity in the inputs, we extend a classic pairwise likelihood model to support sparse...... simulation shows the performance on a real-world music preference dataset which motivates and demonstrates the potential of the sparse Gaussian process formulation for pairwise likelihoods....

  15. Bayesian Gaussian regression analysis of malnutrition for children under five years of age in Ethiopia, EMDHS 2014.

    Science.gov (United States)

    Mohammed, Seid; Asfaw, Zeytu G

    2018-01-01

    The term malnutrition generally refers to both under-nutrition and over-nutrition, but this study uses the term to refer solely to a deficiency of nutrition. In Ethiopia, child malnutrition is one of the most serious public health problems and among the highest in the world. The purpose of the present study was to identify the high-risk factors for malnutrition, test different statistical models for childhood malnutrition, and thereafter weigh the preferable model through model comparison criteria. A Bayesian Gaussian regression model was used to analyze the effect of selected socioeconomic, demographic, health and environmental covariates on malnutrition in children under five years of age. Inference was made using a Bayesian approach based on Markov Chain Monte Carlo (MCMC) simulation techniques in BayesX. The study found that variables such as sex of the child, preceding birth interval, age of the child, father's education level, source of water, mother's body mass index, sex of the head of household, mother's age at birth, wealth index, birth order, diarrhea, child's size at birth and duration of breast feeding showed significant effects on children's malnutrition in Ethiopia. The age of the child, mother's age at birth and mother's body mass index could also be important factors with a nonlinear effect on child malnutrition in Ethiopia. Thus, the present study emphasizes special care regarding variables such as sex of the child, preceding birth interval, father's education level, source of water, sex of the head of household, wealth index, birth order, diarrhea, child's size at birth, duration of breast feeding, age of the child, mother's age at birth and mother's body mass index to combat childhood malnutrition in developing countries.

  16. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  17. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Michael [Univ. of Chicago, IL (United States)

    2017-03-13

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the

  18. Online sparse Gaussian process based human motion intent learning for an electrically actuated lower extremity exoskeleton.

    Science.gov (United States)

    Long, Yi; Du, Zhi-Jiang; Chen, Chao-Feng; Dong, Wei; Wang, Wei-Dong

    2017-07-01

    The most important step for lower extremity exoskeleton is to infer human motion intent (HMI), which contributes to achieve human exoskeleton collaboration. Since the user is in the control loop, the relationship between human robot interaction (HRI) information and HMI is nonlinear and complicated, which is difficult to be modeled by using mathematical approaches. The nonlinear approximation can be learned by using machine learning approaches. Gaussian Process (GP) regression is suitable for high-dimensional and small-sample nonlinear regression problems. GP regression is restrictive for large data sets due to its computation complexity. In this paper, an online sparse GP algorithm is constructed to learn the HMI. The original training dataset is collected when the user wears the exoskeleton system with friction compensation to perform unconstrained movement as far as possible. The dataset has two kinds of data, i.e., (1) physical HRI, which is collected by torque sensors placed at the interaction cuffs for the active joints, i.e., knee joints; (2) joint angular position, which is measured by optical position sensors. To reduce the computation complexity of GP, grey relational analysis (GRA) is utilized to specify the original dataset and provide the final training dataset. Those hyper-parameters are optimized offline by maximizing marginal likelihood and will be applied into online GP regression algorithm. The HMI, i.e., angular position of human joints, will be regarded as the reference trajectory for the mechanical legs. To verify the effectiveness of the proposed algorithm, experiments are performed on a subject at a natural speed. The experimental results show the HMI can be obtained in real time, which can be extended and employed in the similar exoskeleton systems.

  19. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions. Moreover, these processes can match a broader range of second...... athe proposed Monte Carlo algorithm and compare features of translation processes and mixture of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes......A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes...

  20. Gaussian processes retrieval of leaf parameters from a multi-species reflectance, absorbance and fluorescence dataset.

    Science.gov (United States)

    Van Wittenberghe, Shari; Verrelst, Jochem; Rivera, Juan Pablo; Alonso, Luis; Moreno, José; Samson, Roeland

    2014-05-05

    Biochemical and structural leaf properties such as chlorophyll content (Chl), nitrogen content (N), leaf water content (LWC), and specific leaf area (SLA) have the benefit of being estimable through nondestructive spectral measurements. Current practices, however, mainly focus on a limited number of wavelength bands, while more information could be extracted from other wavelengths in the full-range (400-2500 nm) spectrum. In this research, leaf characteristics were estimated from a field-based multi-species dataset, covering a wide range of leaf structures and Chl concentrations. The dataset contains leaves with extremely high Chl concentrations (>100 μg cm⁻²), which are seldom estimated. Parameter retrieval was conducted with the machine learning regression algorithm Gaussian Processes (GP), which is able to perform adaptive, nonlinear data fitting for complex datasets. Moreover, insight into relevant bands is provided during the development of a regression model; consequently, the physical meaning of the model can be explored. The best estimates of SLA, LWC and Chl yielded normalized root mean square errors of 6.0%, 7.7% and 9.1%, respectively. Several distinct wavebands were chosen across the whole spectrum. A band in the red edge (710 nm) appeared to be most important for the estimation of Chl. Interestingly, spectral features related to biochemicals with a structural or carbon storage function (e.g. 1090, 1550, 1670, 1730 nm) were found to be important not only for the estimation of SLA, but also for LWC, Chl or N estimation. Similarly, Chl estimation was also helped by some wavebands related to water content (950, 1430 nm), due to correlation between the parameters. It is shown that leaf parameter retrieval by GP regression is successful and able to cope with large structural differences between leaves. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.

    Science.gov (United States)

    Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H

    2016-11-01

    Depth sensor based 3D human motion estimation hardware such as Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding action performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of Gaussian Process is its cubic learning complexity when dealing with a large database due to the inverse of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
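
    A rough sketch of the local-mixture idea described above: the state space is partitioned, one GP is trained per local region, and a query is answered by a distance-weighted mean of only the nearby local predictions. The clustering scheme, weighting rule, and data are illustrative assumptions, not the paper's exact design.

```python
# Sketch only: partition the input space, train one GP per region, blend nearby predictions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(-3.0, 3.0, size=(600, 2))                 # noisy posture features (toy)
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(600)

k = 6
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
local_gps = [
    GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X[km.labels_ == j], y[km.labels_ == j])
    for j in range(k)
]

def predict(x_query, n_near=2):
    """Weighted mean of the n_near local GPs closest to the query point."""
    d = np.linalg.norm(km.cluster_centers_ - x_query, axis=1)
    near = np.argsort(d)[:n_near]
    w = 1.0 / (d[near] + 1e-9)
    preds = np.array([local_gps[j].predict(x_query[None, :])[0] for j in near])
    return np.sum(w * preds) / np.sum(w)

print(predict(np.array([0.5, -1.0])))
```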

  2. Tail behaviour of Gaussian processes with applications to the Brownian pillow

    NARCIS (Netherlands)

    A.J. Koning (Alex); V. Protassov (Vladimir)

    2001-01-01

    In this paper we investigate the tail behaviour of a random variable S which may be viewed as a functional T of a zero mean Gaussian process X, taking special interest in the situation where X obeys the structure which is typical for limiting processes occurring in nonparametric testing

  3. MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.

  4. A Gaussian decision-support tool for engineering design process

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Spitas, Christos

    2013-01-01

    Decision-making in design is of great importance, resulting in success or failure of a system (Liu et al., 2010; Roozenburg and Eekels, 1995; Spitas, 2011a). This paper describes a robust decision-support tool for the engineering design process, which can be used throughout the design process in either

  5. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p…

  6. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms

  7. Non-Gaussian Autoregressive Processes with Tukey g-and-h Transformations

    KAUST Repository

    Yan, Yuan

    2017-11-20

    When performing a time series analysis of continuous data, for example from climate or environmental problems, the assumption that the process is Gaussian is often violated. Therefore, we introduce two non-Gaussian autoregressive time series models that are able to fit skewed and heavy-tailed time series data. Our two models are based on the Tukey g-and-h transformation. We discuss parameter estimation, order selection, and forecasting procedures for our models and examine their performances in a simulation study. We demonstrate the usefulness of our models by applying them to two sets of wind speed data.

  8. Non-Gaussian Autoregressive Processes with Tukey g-and-h Transformations

    KAUST Repository

    Yan, Yuan; Genton, Marc G.

    2017-01-01

    When performing a time series analysis of continuous data, for example from climate or environmental problems, the assumption that the process is Gaussian is often violated. Therefore, we introduce two non-Gaussian autoregressive time series models that are able to fit skewed and heavy-tailed time series data. Our two models are based on the Tukey g-and-h transformation. We discuss parameter estimation, order selection, and forecasting procedures for our models and examine their performances in a simulation study. We demonstrate the usefulness of our models by applying them to two sets of wind speed data.
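
    For intuition, a small sketch of the construction described in the two records above: a latent Gaussian AR(1) series is pushed through the Tukey g-and-h transform, yielding a skewed (g) and heavy-tailed (h) series. The parameter values and AR coefficient are arbitrary illustrations.

```python
# Sketch only: latent Gaussian AR(1) series pushed through the Tukey g-and-h transform.
import numpy as np

def tukey_gh(z, g=0.5, h=0.1):
    """Tukey g-and-h transform: g controls skewness, h controls tail heaviness."""
    if g == 0.0:
        return z * np.exp(h * z**2 / 2.0)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(3)
n, phi = 1000, 0.7
z = np.zeros(n)
for t in range(1, n):                        # latent Gaussian AR(1) process
    z[t] = phi * z[t - 1] + np.sqrt(1.0 - phi**2) * rng.standard_normal()

x = tukey_gh(z)                              # skewed, heavy-tailed non-Gaussian series
print(x.mean(), x.std())
```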

  9. Learning Curves and Bootstrap Estimates for Inference with Gaussian Processes: A Statistical Mechanics Study

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We employ the replica method of statistical physics to study the average case performance of learning systems. The new feature of our theory is that general distributions of data can be treated, which enables applications to real data. For a class of Bayesian prediction models which are based on Gaussian processes, we discuss Bootstrap estimates for learning curves.

  10. Interplay of nonclassicality and entanglement of two-mode Gaussian fields generated in optical parametric processes

    Czech Academy of Sciences Publication Activity Database

    Arkhipov, Ie.I.; Peřina, Jan; Peřina, J.; Miranowicz, A.

    2016-01-01

    Vol. 94, No. 1 (2016), 1-15, article No. 013807. ISSN 2469-9926. R&D Projects: GA ČR GAP205/12/0382. Institutional support: RVO:68378271. Keywords: two-mode Gaussian fields * optical parametric processes. Subject RIV: BH - Optics, Masers, Lasers. Impact factor: 2.925, year: 2016

  11. Characterizing Dynamic Walking Patterns and Detecting Falls with Wearable Sensors Using Gaussian Process Methods

    Directory of Open Access Journals (Sweden)

    Taehwan Kim

    2017-05-01

    Full Text Available By incorporating a growing number of sensors and adopting machine learning technologies, wearable devices have recently become a prominent health care application domain. Among the related research topics in this field, one of the most important issues is detecting falls while walking. Since such falls may lead to serious injuries, automatically and promptly detecting them during the daily use of smartphones and/or smart watches is a particular need. In this paper, we investigate the use of Gaussian process (GP) methods for characterizing dynamic walking patterns and detecting falls while walking with built-in wearable sensors in smartphones and/or smartwatches. For the task of characterizing dynamic walking patterns in a low-dimensional latent feature space, we propose a novel approach called the auto-encoded Gaussian process dynamical model, in which we combine a GP-based state space modeling method with a nonlinear dimensionality reduction method in a unique manner. Gaussian process methods are well suited to this task because one of their most important strengths is the capability of handling uncertainty in the model parameters. Also, for detecting falls while walking, we propose to recycle the latent samples generated in training the auto-encoded Gaussian process dynamical model for GP-based novelty detection, which can lead to an efficient and seamless solution to the detection task. Experimental results show that the combined use of these GP-based methods can yield promising results for characterizing dynamic walking patterns and detecting falls while walking with the wearable sensors.

  12. On Small Deviation Asymptotics In L2 of Some Mixed Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Alexander I. Nazarov

    2018-04-01

    Full Text Available We study the exact small deviation asymptotics with respect to the Hilbert norm for some mixed Gaussian processes. The simplest example here is the linear combination of the Wiener process and the Brownian bridge. We get the precise final result in this case and in some examples of more complicated processes of similar structure. The proof is based on Karhunen–Loève expansion together with spectral asymptotics of differential operators and complex analysis methods.

  13. A Bayesian optimal design for degradation tests based on the inverse Gaussian process

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Weiwen; Liu, Yu; Li, Yan Feng; Zhu, Shun Peng; Huang, Hong Zhong [University of Electronic Science and Technology of China, Chengdu (China)

    2014-10-15

    The inverse Gaussian process is recently introduced as an attractive and flexible stochastic process for degradation modeling. This process has been demonstrated as a valuable complement for models that are developed on the basis of the Wiener and gamma processes. We investigate the optimal design of the degradation tests on the basis of the inverse Gaussian process. In addition to an optimal design with pre-estimated planning values of model parameters, we also address the issue of uncertainty in the planning values by using the Bayesian method. An average pre-posterior variance of reliability is used as the optimization criterion. A trade-off between sample size and number of degradation observations is investigated in the degradation test planning. The effects of priors on the optimal designs and on the value of prior information are also investigated and quantified. The degradation test planning of a GaAs Laser device is performed to demonstrate the proposed method.
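
    As background for the record above, a toy simulation of monotone degradation paths from an inverse Gaussian (IG) process. The rate, shape parameter, inspection grid, and the particular increment convention IG(mean mu*dt, shape lambda*dt^2) are assumptions for illustration, not the paper's test plan.

```python
# Sketch only: degradation paths from an inverse Gaussian process with IG(mean, shape) increments.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(8)
mu, lam = 2.0, 10.0                    # assumed degradation rate and shape parameter
t = np.linspace(0.0, 10.0, 51)         # inspection times
dt = np.diff(t)

def ig_increments(dt, mu, lam, rng):
    m = mu * dt                        # increment mean
    shape = lam * dt**2                # increment shape (one common convention)
    # scipy parametrisation: IG(mean m, shape s) == invgauss(mu=m/s, scale=s)
    return invgauss.rvs(m / shape, scale=shape, random_state=rng)

paths = np.cumsum([ig_increments(dt, mu, lam, rng) for _ in range(5)], axis=1)
print(paths[:, -1])                    # degradation level of 5 simulated units at t = 10
```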

  14. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    Full Text Available This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood ratio discriminant statistics and derive large deviation theorems for them. Since they are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects our discrimination.

  15. Gaussian process inference for estimating pharmacokinetic parameters of dynamic contrast-enhanced MR images.

    Science.gov (United States)

    Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M

    2012-01-01

    In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.

  16. Improvement on Exoplanet Detection Methods and Analysis via Gaussian Process Fitting Techniques

    Science.gov (United States)

    Van Ross, Bryce; Teske, Johanna

    2018-01-01

    Planetary signals in radial velocity (RV) data are often accompanied by signals coming solely from stellar photo- or chromospheric variation. Such variation can reduce the precision of planet detection and mass measurements, and cause misidentification of planetary signals. Recently, several authors have demonstrated the utility of Gaussian Process (GP) regression for disentangling planetary signals in RV observations (Aigrain et al. 2012; Angus et al. 2017; Czekala et al. 2017; Faria et al. 2016; Gregory 2015; Haywood et al. 2014; Rajpaul et al. 2015; Foreman-Mackey et al. 2017). GP regression models the covariance of multivariate data to make predictions about likely underlying trends in the data, which can be applied to regions where there are no existing observations. GP methods have been used to infer stellar rotation periods; to model and disentangle time series spectra; and to determine physical aspects, populations, and detection of exoplanets, among other astrophysical applications. Here, we apply similar analysis techniques to time series of the Ca II H and K activity indicator measured simultaneously with RVs in a small sample of stars from the large Keck/HIRES RV planet search program. Our goal is to characterize the pattern(s) of non-planetary variation in order to know what is and is not a planetary signal. We investigated ten different GP kernels and their respective hyperparameters to determine the optimal combination (e.g., the lowest Bayesian Information Criterion value) in each stellar data set. To assess the hyperparameters' error, we sampled their posterior distributions using Markov chain Monte Carlo (MCMC) analysis on the optimized kernels. Our results demonstrate how GP analysis of stellar activity indicators alone can contribute to exoplanet detection in RV data, and highlight the challenges in applying GP analysis to relatively small, irregularly sampled time series.
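
    A hedged sketch of the kernel-selection step described above: several candidate GP kernels are fit to a toy, irregularly sampled activity time series and ranked by a BIC-style score built from the optimized log marginal likelihood. The kernels, data, and scoring details are assumptions, not the study's actual setup.

```python
# Sketch only: rank candidate kernels on a toy irregular time series via a BIC-style score.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 100.0, 60))[:, None]         # irregular observation epochs (days)
s = np.sin(2.0 * np.pi * t[:, 0] / 25.0) + 0.2 * rng.standard_normal(60)  # toy activity index

candidates = {
    "rbf": RBF(10.0) + WhiteKernel(0.1),
    "rational_quadratic": RationalQuadratic(length_scale=10.0) + WhiteKernel(0.1),
    "periodic": ExpSineSquared(length_scale=10.0, periodicity=20.0) + WhiteKernel(0.1),
}

for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, s)
    n_hyper = gp.kernel_.theta.size                        # number of optimized hyper-parameters
    bic = n_hyper * np.log(len(s)) - 2.0 * gp.log_marginal_likelihood_value_
    print(f"{name}: log-ML = {gp.log_marginal_likelihood_value_:.2f}, BIC = {bic:.2f}")
```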

  17. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    Full Text Available The article substantiates the initial regression statistical characteristics of intervals between zeros of realizations of random processes and studies the properties that allow these features to be used in autonomous information systems (AIS) of near location (NL). Coefficients of the initial regression (CIR), which minimize the residual sum of squares of multiple initial regression views, are justified on the basis of vector representations associated with a random-vector notion of the analyzed signal parameters. It is shown that, even with no covariance-based private CIR, it is possible to predict one random variable through another with respect to the deterministic components. The paper studies how the CIR of interval sizes between zeros of a narrowband, wide-sense stationary random process depend on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIRs do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend only weakly on the type of spectrum. These CIR properties enable their use as an informative parameter when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that, with increasing relative bandwidth, the relative variance in estimating the average energy spectrum frequency of a stationary random process ceases to depend on the particular process realization once more than ten intervals between zeros are processed. The obtained results can be used in the AIS NL to solve the tasks of detection and signal recognition, when a decision is made in conditions of unknown mathematical expectations on a limited observation

  18. On the joint distribution of excursion duration and amplitude of a narrow-band Gaussian process

    DEFF Research Database (Denmark)

    Ghane, Mahdi; Gao, Zhen; Blanke, Mogens

    2018-01-01

    The probability density of crest amplitude and of duration of exceeding a given level are used in many theoretical and practical problems in engineering. The joint density is essential for design of constructions that are subjected to waves and wind. The presently available joint distributions of amplitude and period are limited to excursion through a mean-level or to describe the asymptotic behavior of high level excursions. This paper extends the knowledge by presenting a theoretical derivation of probability of wave exceedance amplitude and duration, for a narrow-band Gaussian process … distribution, as expected, and that the marginal distribution of excursion duration works both for asymptotic and non-asymptotic cases. The suggested model is found to be a good replacement for the empirical distributions that are widely used. Results from simulations of narrow-band Gaussian processes, real…

  19. Generalized Inferences about the Mean Vector of Several Multivariate Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Pilar Ibarrola

    2015-01-01

    Full Text Available We consider in this paper the problem of comparing the means of several multivariate Gaussian processes. It is assumed that the means depend linearly on an unknown vector parameter θ and that nuisance parameters appear in the covariance matrices. More precisely, we deal with the problem of testing hypotheses, as well as obtaining confidence regions for θ. Both methods will be based on the concepts of generalized p value and generalized confidence region adapted to our context.

  20. Implicit Treatment of Technical Specification and Thermal Hydraulic Parameter Uncertainties in Gaussian Process Model to Estimate Safety Margin

    Directory of Open Access Journals (Sweden)

    Douglas A. Fynan

    2016-06-01

    Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident, with sampling of safety system configuration, sequence timing, technical specification, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is performed only on the dominant input variables, the safety injection flow rate and the delay time for AC-powered pumps to start (representing sequence timing uncertainty), providing a predictive model for the peak clad temperature during the reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
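
    A simplified sketch of the idea of regressing only on the dominant inputs and absorbing the remaining uncertainties into the GP noise term: here the WhiteKernel's optimized noise level plays the role of the "measurement noise" contributed by the non-modelled inputs. Inputs, outputs, and scales are synthetic stand-ins, not plant simulation data.

```python
# Sketch only: GP on the two dominant inputs; everything else is absorbed by the noise term.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
n = 150
si_flow = rng.uniform(20.0, 60.0, n)                 # safety injection flow rate (toy units)
ac_delay = rng.uniform(0.0, 40.0, n)                 # AC pump start delay (toy units)
other = rng.standard_normal(n)                       # lumped effect of all other uncertain inputs
pct = 900.0 + 4.0 * ac_delay - 3.0 * si_flow + 25.0 * other   # synthetic peak clad temperature

X = np.column_stack([si_flow, ac_delay])
kernel = RBF(length_scale=[10.0, 10.0]) + WhiteKernel(noise_level=100.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, pct)

mean, std = gp.predict(np.array([[40.0, 20.0]]), return_std=True)
print(mean[0], std[0])                               # local prediction and uncertainty bound
```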

  1. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    International Nuclear Information System (INIS)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-01-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  2. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Science.gov (United States)

    Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial

    2016-09-01

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the

  3. Gaussian processes with built-in dimensionality reduction: Applications to high-dimensional uncertainty propagation

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu

    2016-09-15

    Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
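
    An intentionally simplified illustration of the active-subspace idea in the three records above: the high-dimensional input is projected onto a low-dimensional linear subspace and the GP link function is learned on the projected coordinate. In the papers the projection matrix is estimated jointly as a covariance hyper-parameter; here, as a toy assumption, the direction is taken from a gradient-free linear fit instead.

```python
# Sketch only: project onto an estimated 1-D direction, then learn the link function with a GP.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
d, n = 10, 300
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)                     # the (unknown) active direction
y = np.sin(X @ w_true) + 0.05 * rng.standard_normal(n)

# Gradient-free heuristic for the direction (an assumption; the papers learn the projection
# matrix as an additional covariance hyper-parameter instead).
w_hat = LinearRegression().fit(X, y).coef_
w_hat /= np.linalg.norm(w_hat)

z = (X @ w_hat)[:, None]                             # project inputs onto the subspace
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(z, y)

X_new = rng.standard_normal((5, d))
print(gp.predict((X_new @ w_hat)[:, None]))
```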

  4. Analysis of multi-species point patterns using multivariate log Gaussian Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao; Jalilian, Abdollah

    Multivariate log Gaussian Cox processes are flexible models for multivariate point patterns. However, they have so far only been applied in bivariate cases. In this paper we move beyond the bivariate case in order to model multi-species point patterns of tree locations. In particular we address the problems of identifying parsimonious models and of extracting biologically relevant information from the fitted models. The latent multivariate Gaussian field is decomposed into components given in terms of random fields common to all species and components which are species specific. This allows … of the data. The selected number of common latent fields provides an index of complexity of the multivariate covariance structure. Hierarchical clustering is used to identify groups of species with similar patterns of dependence on the common latent fields.

  5. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    Science.gov (United States)

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method. Copyright © 2013, The International Biometric Society.

  6. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  7. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    Science.gov (United States)

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prosthesis and finger exoskeletons. However, in estimating multiple and a high number of degrees-of-freedom (DOF) kinematics from EMG, output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN) which is used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to multi-output ANN and even dedicated or independent regression models.

  8. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  9. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. For a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author) [fr

  10. a Geographic Weighted Regression for Rural Highways Crashes Modelling Using the Gaussian and Tricube Kernels: a Case Study of USA Rural Highways

    Science.gov (United States)

    Aghayari, M.; Pahlavani, P.; Bigdeli, B.

    2017-09-01

    Based on the world health organization (WHO) report, driving incidents are counted as one of the eight leading causes of death in the world. The purpose of this paper is to develop a method for regression on the effective parameters of highway crashes. Traditional methods assume that the data are completely independent and that the environment is homogenous, whereas crashes are spatial events occurring in geographic space and carry spatial data. Spatial data have features such as spatial autocorrelation and spatial non-stationarity that make them more difficult to work with. The proposed method has been implemented on a set of records of fatal crashes that occurred on highways connecting eight eastern states of the US. These data were recorded between the years 2007 and 2009. In this study, we have used the GWR method with two kernels, Gaussian and tricube. The number of casualties has been considered as the dependent variable, and the number of persons in the crash, road alignment, number of lanes, pavement type, surface condition, road fence, light condition, vehicle type, weather, drunk driver, speed limitation, harmful event, road profile, and junction type have been considered as explanatory variables, following previous studies that used the GWR method. We have compared the results of the implementation with the OLS method. Results showed that R2 for the OLS method is 0.0654 and for the proposed method is 0.9196, which implies that the proposed GWR is the better method for regression on rural highway crashes.
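
    A small sketch of the GWR machinery referred to above: a Gaussian or tricube kernel converts the distance between crash sites and a regression point into weights, and a weighted least-squares fit is solved locally at that point. Coordinates, bandwidth, and covariates are made-up illustrations.

```python
# Sketch only: geographically weighted regression with Gaussian and tricube kernels.
import numpy as np

def gaussian_kernel(d, bw):
    return np.exp(-0.5 * (d / bw) ** 2)

def tricube_kernel(d, bw):
    w = (1.0 - (d / bw) ** 3) ** 3
    return np.where(d < bw, w, 0.0)

def gwr_local_fit(coords, X, y, point, bw, kernel=gaussian_kernel):
    """Weighted least squares at one regression point (returns local coefficients)."""
    d = np.linalg.norm(coords - point, axis=1)
    w = kernel(d, bw)
    Xd = np.column_stack([np.ones(len(X)), X])          # add intercept
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

rng = np.random.default_rng(7)
coords = rng.uniform(0.0, 100.0, size=(200, 2))         # crash locations (arbitrary units)
X = rng.normal(size=(200, 3))                           # e.g. persons in crash, lanes, speed limit
y = 1.0 + X @ np.array([0.8, -0.3, 0.5]) + 0.01 * coords[:, 0] + rng.normal(0.0, 0.2, 200)

print(gwr_local_fit(coords, X, y, point=np.array([50.0, 50.0]), bw=25.0))
print(gwr_local_fit(coords, X, y, point=np.array([50.0, 50.0]), bw=25.0, kernel=tricube_kernel))
```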

  11. Analytical and regression models of glass rod drawing process

    Science.gov (United States)

    Alekseeva, L. B.

    2018-03-01

    The process of drawing glass rods (light guides) is being studied. The parameters of the process affecting the quality of the light guide have been determined. To solve the problem, mathematical models based on general equations of continuum mechanics are used. The conditions for the stable flow of the drawing process have been found, which are determined by the stability of the motion of the glass mass in the formation zone to small uncontrolled perturbations. The sensitivity of the formation zone to perturbations of the drawing speed and viscosity is estimated. Experimental models of the drawing process, based on the regression analysis methods, have been obtained. These models make it possible to customize a specific production process to obtain light guides of the required quality. They allow one to find the optimum combination of process parameters in the chosen area and to determine the required accuracy of maintaining them at a specified level.

  12. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Science.gov (United States)

    McDowell, Ian C; Manandhar, Dinesh; Vockley, Christopher M; Schmid, Amy K; Reddy, Timothy E; Engelhardt, Barbara E

    2018-01-01

    Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  13. Clustering gene expression time series data using an infinite Gaussian process mixture model.

    Directory of Open Access Journals (Sweden)

    Ian C McDowell

    2018-01-01

    Full Text Available Transcriptome-wide time series expression profiling is used to characterize the cellular response to environmental perturbations. The first step to analyzing transcriptional response data is often to cluster genes with similar responses. Here, we present a nonparametric model-based method, Dirichlet process Gaussian process mixture model (DPGP), which jointly models data clusters with a Dirichlet process and temporal dependencies with Gaussian processes. We demonstrate the accuracy of DPGP in comparison to state-of-the-art approaches using hundreds of simulated data sets. To further test our method, we apply DPGP to published microarray data from a microbial model organism exposed to stress and to novel RNA-seq data from a human cell line exposed to the glucocorticoid dexamethasone. We validate our clusters by examining local transcription factor binding and histone modifications. Our results demonstrate that jointly modeling cluster number and temporal dependencies can reveal shared regulatory mechanisms. DPGP software is freely available online at https://github.com/PrincetonUniversity/DP_GP_cluster.

  14. Using Gaussian Process Annealing Particle Filter for 3D Human Tracking

    Directory of Open Access Journals (Sweden)

    Michael Rudzsky

    2008-01-01

    Full Text Available We present an approach for human body parts tracking in 3D with prelearned motion models using multiple cameras. A Gaussian process annealing particle filter is proposed for tracking in order to reduce the dimensionality of the problem and to increase the tracker's stability and robustness. Compared with a regular annealed particle filter-based tracker, we show that our algorithm tracks better on low frame rate videos. We also show that our algorithm is capable of recovering after a temporary target loss.

  15. Probabilistic electricity price forecasting with variational heteroscedastic Gaussian process and active learning

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Lin; Lou, Jianyong

    2015-01-01

    Highlights: • A novel active learning model for the probabilistic electricity price forecasting. • Heteroscedastic Gaussian process that captures the local volatility of the electricity price. • Variational Bayesian learning that avoids over-fitting. • Active learning algorithm that reduces the computational efforts. - Abstract: Electricity price forecasting is essential for the market participants in their decision making. Nevertheless, the accuracy of such forecasting cannot be guaranteed due to the high variability of the price data. For this reason, in many cases, rather than merely point forecasting results, market participants are more interested in the probabilistic price forecasting results, i.e., the prediction intervals of the electricity price. Focusing on this issue, this paper proposes a new model for the probabilistic electricity price forecasting. This model is based on the active learning technique and the variational heteroscedastic Gaussian process (VHGP). It provides the heteroscedastic Gaussian prediction intervals, which effectively quantify the heteroscedastic uncertainties associated with the price data. Because the high computational effort of VHGP hinders its application to the large-scale electricity price forecasting tasks, we design an active learning algorithm to select a most informative training subset from the whole available training set. By constructing the forecasting model on this smaller subset, the computational efforts can be significantly reduced. In this way, the practical applicability of the proposed model is enhanced. The forecasting performance and the computational time of the proposed model are evaluated using the real-world electricity price data, which is obtained from the ANEM, PJM, and New England ISO

  16. A Gaussian process and derivative spectral-based algorithm for red blood cell segmentation

    Science.gov (United States)

    Xue, Yingying; Wang, Jianbiao; Zhou, Mei; Hou, Xiyue; Li, Qingli; Liu, Hongying; Wang, Yiting

    2017-07-01

    As an imaging technology used in remote sensing, hyperspectral imaging can provide more information than traditional optical imaging of blood cells. In this paper, an AOTF-based microscopic hyperspectral imaging system is used to capture hyperspectral images of blood cells. To segment the red blood cells, a Gaussian process with a squared exponential kernel function is first applied, after data preprocessing, to obtain a preliminary segmentation. The derivative spectrum with the spectral angle mapping algorithm is then applied to the original image to segment the cell boundaries, and these boundaries are used to cut the cells obtained from the Gaussian process so that adjacent cells are separated. Morphological processing, including closing, erosion and dilation, is then applied to keep adjacent cells apart, and by applying median filtering to remove noise points and filling holes inside the cells, the final segmentation result is obtained. The experimental results show that this method achieves a better segmentation effect on human red blood cells.

  17. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes.

    Science.gov (United States)

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-06-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring of the interface temperature at skin level in lower-limb prosthesis is notoriously complicated. This is due to the flexible nature of the interface liners used which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner rather than skin and liner could be an important step in alleviating complaints on increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm - Gaussian processes is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of thermal time constant of prosthetic materials in Gaussian processes technique which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.

  18. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    Science.gov (United States)

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  19. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakamura

    2017-12-01

    Full Text Available Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods.

  20. Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information

    Science.gov (United States)

    Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.

    2018-06-01

    Gaussian process tomography (GPT) is a recently developed tomography method based on the Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, the Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions with a view to impurity transport and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and the results are shown of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra. The method is compared with the standard algorithm based on minimization of the Fisher information.

  1. A GEOGRAPHIC WEIGHTED REGRESSION FOR RURAL HIGHWAYS CRASHES MODELLING USING THE GAUSSIAN AND TRICUBE KERNELS: A CASE STUDY OF USA RURAL HIGHWAYS

    Directory of Open Access Journals (Sweden)

    M. Aghayari

    2017-09-01

    Full Text Available Based on the world health organization (WHO) report, driving incidents are counted as one of the eight leading causes of death in the world. The purpose of this paper is to develop a method for regression on the effective parameters of highway crashes. Traditional methods assume that the data are completely independent and that the environment is homogenous, whereas crashes are spatial events occurring in geographic space and carry spatial data. Spatial data have features such as spatial autocorrelation and spatial non-stationarity that make them more difficult to work with. The proposed method has been implemented on a set of records of fatal crashes that occurred on highways connecting eight eastern states of the US. These data were recorded between the years 2007 and 2009. In this study, we have used the GWR method with two kernels, Gaussian and tricube. The number of casualties has been considered as the dependent variable, and the number of persons in the crash, road alignment, number of lanes, pavement type, surface condition, road fence, light condition, vehicle type, weather, drunk driver, speed limitation, harmful event, road profile, and junction type have been considered as explanatory variables, following previous studies that used the GWR method. We have compared the results of the implementation with the OLS method. Results showed that R2 for the OLS method is 0.0654 and for the proposed method is 0.9196, which implies that the proposed GWR is the better method for regression on rural highway crashes.

  2. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    Full Text Available This paper discusses nonparametric kernel regression with the regressor being a d-dimensional β-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate √(n(T)h^d), where n(T) is the number of regenerations for a β-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimate is quite reasonable when the leave-one-out cross validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with 3-month and 5-year T-bill rates and discover the existence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
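
    A compact reminder, on synthetic data, of two ingredients mentioned above: Nadaraya-Watson kernel regression for the mean function and leave-one-out cross validation for bandwidth selection. This is generic kernel regression, not the paper's null-recurrent asymptotic framework.

```python
# Sketch only: Nadaraya-Watson estimator with leave-one-out cross-validated bandwidth.
import numpy as np

def nw_estimate(x0, x, y, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

def loo_cv_score(x, y, h):
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i            # leave observation i out
        errs.append((y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(9)
x = rng.uniform(-3.0, 3.0, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(200)

bandwidths = np.linspace(0.05, 1.0, 20)
h_best = bandwidths[np.argmin([loo_cv_score(x, y, h) for h in bandwidths])]
print(h_best, nw_estimate(0.0, x, y, h_best))
```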

  3. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Cheung Leo

    2007-02-01

    Full Text Available Background: Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potentials to achieve this goal. Results: A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make

  4. Bayesian electron density inference from JET lithium beam emission spectra using Gaussian processes

    Science.gov (United States)

    Kwak, Sehyun; Svensson, J.; Brix, M.; Ghim, Y.-C.; Contributors, JET

    2017-03-01

    A Bayesian model to infer edge electron density profiles is developed for the JET lithium beam emission spectroscopy (Li-BES) system, which measures Li I (2p-2s) line radiation using 26 channels with ∼1 cm spatial resolution and 10-20 ms temporal resolution. The density profile is modelled using a Gaussian process prior, and the uncertainty of the density profile is calculated by a Markov chain Monte Carlo (MCMC) scheme. From the spectra measured by the transmission grating spectrometer, the Li I line intensities are extracted and modelled as a function of the plasma density by a multi-state model which describes the relevant processes between neutral lithium beam atoms and plasma particles. The spectral model fully takes into account interference filter and instrument effects, which are separately estimated, again using Gaussian processes. The line intensities are inferred based on a spectral model consistent with the measured spectra within their uncertainties, which include photon statistics and electronic noise. Our newly developed method to infer JET edge electron density profiles has the following advantages in comparison to the conventional method: (i) it provides full posterior distributions of edge density profiles, including their associated uncertainties, (ii) the available radial range for density profiles is increased to the full observation range (∼26 cm), (iii) an assumption of a monotonic electron density profile is not necessary, (iv) the absolute calibration factor of the diagnostic system is automatically estimated, overcoming the limitation of the conventional technique and allowing us to infer the electron density profiles for all pulses without preprocessing the data or an additional boundary condition, and (v) since the full spectrum is modelled, the procedure of modulating the beam to measure the background signal is only necessary when the Li I line overlaps with impurity lines.

  5. Quadratic Polynomial Regression using Serial Observation Processing:Implementation within DART

    Science.gov (United States)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many ensemble-based Kalman filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
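
    A minimal sketch of the serial, one-observation-at-a-time ensemble update described above is given below. It uses a simple stochastic (perturbed-observation) ensemble Kalman update with scalar equations; the observation operator, error variances and ensemble are all placeholders, and this is not the DART implementation itself.

    import numpy as np

    def serial_enkf_update(ens, obs, obs_err_var, H_rows, rng):
        """ens: (n_members, n_state); observations are assimilated one scalar at a time."""
        ens = ens.copy()
        for y_o, var_o, h in zip(obs, obs_err_var, H_rows):
            hx = ens @ h                                   # obs-space prior, one value per member
            prior_var = np.var(hx, ddof=1)
            cov_xy = (ens - ens.mean(0)).T @ (hx - hx.mean()) / (len(hx) - 1)
            gain = cov_xy / (prior_var + var_o)            # scalar denominator
            perturbed = y_o + rng.normal(scale=np.sqrt(var_o), size=len(hx))
            ens += np.outer(perturbed - hx, gain)          # regress the increment onto the state
        return ens

    rng = np.random.default_rng(2)
    ens = rng.normal(size=(40, 3))                         # 40 members, 3 state variables
    H = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
    updated = serial_enkf_update(ens, obs=[0.5, -0.2], obs_err_var=[0.1, 0.1],
                                 H_rows=H, rng=rng)
    print(updated.mean(axis=0))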

  6. Asymptotic analysis of the role of spatial sampling for covariance parameter estimation of Gaussian processes

    International Nuclear Information System (INIS)

    Bachoc, Francois

    2014-01-01

    Covariance parameter estimation of Gaussian processes is analyzed in an asymptotic framework. The spatial sampling is a randomly perturbed regular grid and its deviation from the perfect regular grid is controlled by a single scalar regularity parameter. Consistency and asymptotic normality are proved for the Maximum Likelihood and Cross Validation estimators of the covariance parameters. The asymptotic covariance matrices of the covariance parameter estimators are deterministic functions of the regularity parameter. By means of an exhaustive study of the asymptotic covariance matrices, it is shown that the estimation is improved when the regular grid is strongly perturbed. Hence, an asymptotic confirmation is given to the commonly admitted fact that using groups of observation points with small spacing is beneficial to covariance function estimation. Finally, the prediction error, using a consistent estimator of the covariance parameters, is analyzed in detail. (authors)

  7. Modeling of the Monthly Rainfall-Runoff Process Through Regressions

    Directory of Open Access Journals (Sweden)

    Campos-Aranda Daniel Francisco

    2014-10-01

    Full Text Available To solve the problems associated with the assessment of the water resources of a river, modeling the rainfall-runoff process (RRP) allows missing runoff data to be deduced and the runoff record to be extended, since the available information on precipitation is generally larger. It also enables the estimation of inflows to reservoirs when their construction led to the suppression of the gauging station. The simplest mathematical model that can be set up for the RRP is a linear regression or curve on a monthly basis. Such a model is described in detail and is calibrated with the simultaneous record of monthly rainfall and runoff at the Ballesmi hydrometric station, which covers 35 years. Since the runoff at this station has an important contribution from spring discharge, the record is first corrected by removing that contribution. To do this, a procedure was developed based either on monthly average regional runoff coefficients or on a nearby and similar watershed; in this case the Tancuilín gauging station was used. Both stations belong to Partial Hydrologic Region No. 26 (Lower Rio Panuco) and are located within the state of San Luis Potosi, México. The study indicates that the monthly regression model, due to its conceptual approach, faithfully reproduces monthly average runoff volumes and achieves an excellent approximation of the dispersion, as shown by the calculated means and standard deviations.
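
    The month-by-month regression described here is straightforward to reproduce. The sketch below fits one straight line per calendar month to synthetic rainfall-runoff pairs (placeholders for the corrected Ballesmi record); only the structure of the calculation is intended to match the paper.

    import numpy as np

    rng = np.random.default_rng(10)
    months = np.tile(np.arange(1, 13), 35)                 # 35 years of monthly records
    rain = rng.gamma(shape=2.0, scale=40.0, size=months.size)
    runoff = 0.25 * rain + rng.normal(scale=5.0, size=months.size)

    coeffs = {}
    for m in range(1, 13):
        sel = months == m
        slope, intercept = np.polyfit(rain[sel], runoff[sel], deg=1)
        coeffs[m] = (slope, intercept)                      # one regression per calendar month
    print(coeffs[6])                                        # e.g. the June relationship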

  8. Laser Raman detection for oral cancer based on a Gaussian process classification method

    International Nuclear Information System (INIS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Shen, Aiguo; Hu, Jiming; Jia, Jun

    2013-01-01

    Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. The incidence rate accounts for 80% of total oral cancer and shows an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is always delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data, consisting of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal people, were collected by utilizing an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed, and the experimental result showed that the data obeyed a normal distribution. Moreover, the distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal people and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal people could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed, and the remaining ones were diagnosed as having oral gland carcinoma. For the classification of oral gland carcinoma versus oral squamous cell carcinoma, the correct ratio was 66.67% and the erroneously diagnosed percentage was 33.33%. The total sensitivity was 80% and the specificity was 100%, with a Matthews correlation coefficient (MCC) of 0.447213595. Considering these numerical results, the application prospects and clinical value of this technique are promising. (letter)
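
    The summary statistics quoted above (sensitivity, specificity, MCC) follow directly from a 2x2 confusion matrix. The sketch below shows the calculation with placeholder counts chosen only for illustration; they are not the patient counts from this study.

    import numpy as np

    def binary_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        mcc = (tp * tn - fp * fn) / np.sqrt(
            float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return sensitivity, specificity, mcc

    # placeholder counts for a small two-class problem
    print(binary_metrics(tp=4, fn=1, tn=5, fp=0))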

  9. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to a Monte Carlo approach. Despite the reduced computational time, sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of the order O(d), rather than the O(d×10^3) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
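
    The emulator-based workflow can be sketched as follows: run the expensive model a handful of times, fit a Gaussian process emulator, and then estimate first-order (variance-based) sensitivity indices by cheap Monte Carlo on the emulator. In this sketch the "expensive model" is a toy three-parameter function standing in for the 1D vascular model, and the index estimator is a simple double-loop calculation rather than the estimator used in the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_model(x):                       # stand-in for the 1D vascular model
        return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.1 * x[:, 2]

    rng = np.random.default_rng(3)
    d = 3
    X_train = rng.uniform(0, 1, size=(30, d))     # a small, O(d) number of training runs
    y_train = expensive_model(X_train)

    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[0.3] * d),
                                  normalize_y=True).fit(X_train, y_train)

    def first_order_index(i, n_outer=200, n_inner=200):
        """Double-loop estimate of Var(E[Y | X_i]) / Var(Y), evaluated on the emulator."""
        cond_means = []
        for v in rng.uniform(0, 1, n_outer):
            X = rng.uniform(0, 1, size=(n_inner, d))
            X[:, i] = v
            cond_means.append(gp.predict(X).mean())
        total_var = gp.predict(rng.uniform(0, 1, size=(5000, d))).var()
        return np.var(cond_means) / total_var

    print([round(first_order_index(i), 2) for i in range(d)])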

  10. Laser Raman detection for oral cancer based on a Gaussian process classification method

    Science.gov (United States)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-06-01

    Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. The incidence rate accounts for 80% of total oral cancer and shows an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is always delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data, consisting of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal people, were collected by utilizing an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed, and the experimental result showed that the data obeyed a normal distribution. Moreover, the distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal people and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal people could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed, and the remaining ones were diagnosed as having oral gland carcinoma. For the classification of oral gland carcinoma versus oral squamous cell carcinoma, the correct ratio was 66.67% and the erroneously diagnosed percentage was 33.33%. The total sensitivity was 80% and the specificity was 100%, with a Matthews correlation coefficient (MCC) of 0.447213595. Considering these numerical results, the application prospects and clinical value of this technique are promising.

  11. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space- and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically have a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian processes in an online mode is the need to solve a large system of equations involving the associated covariance matrix, which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining an O(T) memory footprint, compared to the O(T^4) time and O(T^2) memory requirements of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in an NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
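
    The prediction-plus-control-chart idea can be illustrated with a short sketch: fit a GP with a periodic kernel to a training window of a seasonal series, then flag any new observation whose residual falls outside a 3-sigma band around the GP prediction. The series, kernel settings and injected change below are synthetic assumptions, and the efficient covariance updates of the paper are not reproduced.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

    rng = np.random.default_rng(8)
    t = np.arange(0, 120, dtype=float)
    y = np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=len(t))
    y[90:] += 1.0                                         # injected change to be detected

    train = 60                                            # training window
    kernel = ExpSineSquared(periodicity=12.0) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel, normalize_y=True).fit(t[:train, None], y[:train])

    mu, sd = gp.predict(t[train:, None], return_std=True)
    flags = np.abs(y[train:] - mu) > 3 * sd               # control-chart style rule
    print("flagged time steps:", t[train:][flags])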

  12. Maxima estimate of non gaussian process from observation of time history samples

    International Nuclear Information System (INIS)

    Borsoi, L.

    1987-01-01

    The problem constitutes a formidable task but is essential for industrial applications: extreme value design, fatigue analysis, etc. Even in the linear Gaussian case, the ergodicity of the process does not remove the need for an observation duration long enough to make reliable estimates. As is well known, this duration is closely related to the process autocorrelation. A subterfuge, which slightly distorts the problem, consists in considering a periodic random process and adjusting the observation duration to a complete period. In the nonlinear case, the stated problem is all the more important because time history simulation is presently the only practicable way of analysing structures. Thus it is always interesting to fit a tractable model to raw time history observations. In some cases this can be done with a Gumbel-Poisson model. The difficulty is then to make reliable estimates of the parameters involved in the model. Unfortunately, it seems that even the use of sophisticated Bayesian methods does not allow the necessary observation duration to be reduced as much as desired. One of the difficulties lies in process ergodicity, which is often assumed on the basis of physical considerations but is not always rigorously established. Another difficulty is the confusion between hidden information, which can be extracted, and missing information, which cannot. Finally, it must be recalled that the obligation to consider sufficiently long time histories is not always a drawback, given the continuing reduction in computing costs. (orig./HP)

  13. Persistence of non-Markovian Gaussian stationary processes in discrete time

    Science.gov (United States)

    Nyberg, Markus; Lizana, Ludvig

    2018-04-01

    The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero-mean Gaussian stationary processes in discrete time n. Few results are known for the persistence P_0(n) in discrete time, except the large-time behavior, which is characterized by the nontrivial constant θ through P_0(n) ~ θ^n. Using a modified version of the independent interval approximation (IIA) that we developed before, we are able to calculate P_0(n) analytically in z-transform space in terms of the autocorrelation function A(n). If A(n) → 0 as n → ∞, we extract θ numerically, while if A(n) = 0 for finite n > N, we find θ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated "first order moving average process", where A(n) = 0 for n > 1; the double-exponential-correlated "second order autoregressive process", where A(n) = c_1 λ_1^n + c_2 λ_2^n; and power-law-correlated variables, where A(n) ~ n^(-μ). Apart from the power-law case when μ < 5, we find excellent agreement with simulations.
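
    Because the quantities involved are easy to simulate, a brute-force Monte Carlo check of the discrete-time persistence is a useful companion to the analytical IIA result. The sketch below estimates P_0(n) and a late-time decay constant θ for a first-order moving-average process x_n = ξ_n + a ξ_{n-1} (one of the paper's examples); the coefficient and trajectory counts are arbitrary choices, and the IIA formula itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(9)
    a, n_steps, n_traj = 0.5, 15, 200_000
    xi = rng.normal(size=(n_traj, n_steps + 1))
    x = xi[:, 1:] + a * xi[:, :-1]                 # MA(1): x_n = xi_n + a * xi_{n-1}

    alive = np.cumprod(x > 0, axis=1)              # 1 while the path has stayed above zero
    P0 = alive.mean(axis=0)                        # persistence P_0(n), n = 1..15
    theta = (P0[-1] / P0[-6]) ** (1 / 5)           # late-time ratio estimate of theta
    print("P_0(10) =", P0[9], " theta ~", round(theta, 3))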

  14. Computerized prediction of intensive care unit discharge after cardiac surgery: development and validation of a Gaussian processes model

    Directory of Open Access Journals (Sweden)

    Meyfroidt Geert

    2011-10-01

    Full Text Available Abstract Background The intensive care unit (ICU) length of stay (LOS) of patients undergoing cardiac surgery may vary considerably, and is often difficult to predict within the first hours after admission. The early clinical evolution of a cardiac surgery patient might be predictive of his LOS. The purpose of the present study was to develop a predictive model for ICU discharge after non-emergency cardiac surgery, by analyzing the first 4 hours of data in the computerized medical record of these patients with Gaussian processes (GP), a machine learning technique. Methods Non-interventional study. Predictive modeling, separate development (n = 461) and validation (n = 499) cohorts. GP models were developed to predict the probability of ICU discharge the day after surgery (classification task), and to predict the day of ICU discharge as a discrete variable (regression task). GP predictions were compared with predictions by EuroSCORE, nurses and physicians. The classification task was evaluated using the aROC for discrimination, and the Brier score, scaled Brier score, and Hosmer-Lemeshow test for calibration. The regression task was evaluated by comparing median actual and predicted discharge, the loss penalty function (LPF, (actual − predicted)/actual), and root mean squared relative errors (RMSRE). Results Median (P25-P75) ICU length of stay was 3 (2-5) days. For classification, the GP model showed an aROC of 0.758, which was significantly higher than the predictions by nurses, but not better than EuroSCORE and physicians. The GP had the best calibration, with a Brier score of 0.179 and a Hosmer-Lemeshow p-value of 0.382. For regression, GP had the highest proportion of patients with a correctly predicted day of discharge (40%), which was significantly better than the EuroSCORE (p Conclusions A GP model that uses PDMS data of the first 4 hours after admission in the ICU of scheduled adult cardiac surgery patients was able to predict discharge from the ICU as a

  15. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    International Nuclear Information System (INIS)

    Li, Dong; Svensson, J.; Thomsen, H.; Werner, A.; Wolf, R.; Medina, F.

    2013-01-01

    In this study, a Bayesian non-stationary Gaussian process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  16. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    Science.gov (United States)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
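
    The key computational step described above has a closed form: with a GP prior on the emissivity and a linear line-integral operator under normal noise, the posterior is multivariate normal. The sketch below works this out on a 1D toy grid; the geometry matrix, length scale and noise level are illustrative assumptions, not any actual diagnostic geometry.

    import numpy as np

    def sq_exp_cov(x, length_scale=0.15, sigma_f=1.0):
        diff = x[:, None] - x[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (diff / length_scale) ** 2)

    rng = np.random.default_rng(4)
    n_pix, n_chords = 50, 12
    x = np.linspace(0, 1, n_pix)                       # 1D pixel grid for simplicity
    K = sq_exp_cov(x)                                  # GP prior covariance of the emissivity
    R = rng.uniform(size=(n_chords, n_pix)) / n_pix    # toy line-integral (geometry) matrix
    Sigma = 0.01 * np.eye(n_chords)                    # measurement noise covariance

    f_true = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)     # synthetic emissivity profile
    d = R @ f_true + rng.multivariate_normal(np.zeros(n_chords), Sigma)

    S = R @ K @ R.T + Sigma
    post_mean = K @ R.T @ np.linalg.solve(S, d)                 # analytic posterior mean
    post_cov = K - K @ R.T @ np.linalg.solve(S, R @ K)          # analytic posterior covariance
    post_std = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))
    print(post_mean[:5], post_std[:5])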

  17. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    Science.gov (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GPs modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.

  18. Gaussian process tomography for the analysis of line-integrated measurements in fusion plasmas

    International Nuclear Information System (INIS)

    Li, Dong

    2014-01-01

    In nuclear fusion research, a variety of diagnostics have been devised for the measurement of different physical quantities, such as electromagnetic radiation in different wavelength intervals. The radiation, including the soft X-ray spectral range, Hα emission and others, can be recorded by specifically designed detectors with different sampling frequencies. Commonly, only line-integrated observations are possible, because the detectors have to view the plasma from a position outside the plasma. Therefore, tomography algorithms have been developed to infer the local information of the targeted physical variable from a number of line-integrated data. This thesis presents a Bayesian Gaussian Process Tomographic (GPT) method applied to both soft X-ray and bolometer systems. For the ill-posed inversion problem of reconstructing a 2D emissivity distribution from a number of noisy line-integrated data, Bayesian probability theory can provide a posterior probability distribution about many possible solutions centered at a single most probable solution. The combination of a Gaussian process (GP) prior and a multivariate normal (MVN) likelihood makes the posterior probability an MVN distribution which provides both the solution and its associated uncertainty. The GP prior enforces the regularization on smoothness by adjusting the length-scale defined in a covariance function. In particular, a non-stationary GP has been developed to improve the accuracy of reconstruction by using locally adaptive length-scales to take into account the varying smoothness at different positions. The parameters embedded in the model assumption can be optimized by maximizing their joint probability based on a Bayesian Occam's razor formalism. In contrast with other tomographic techniques, this method is analytic and non-iterative, so it can be fast enough for real-time applications under an approximate optimization state. The uncertainty of the

  19. A Novel Method for Generating Non-Stationary Gaussian Processes for Use in Digital Radar Simulators

    National Research Council Canada - National Science Library

    Boehm, James A; Debroux, Patrick S

    2007-01-01

    This report presents a novel and simple way to determine the transient response of the output of any linear system, described in the s-domain by an nth order polynomial, subjected to white Gaussian noise...

  20. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    Science.gov (United States)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms can cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. Excellent compromises between accuracy and scalability are obtained in all applications.
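
    One of the speed-up strategies mentioned here, random Fourier features, replaces the exact kernel with an explicit low-dimensional feature map so that a linear (ridge) model in feature space approximates kernel regression at a cost linear in the number of samples. The sketch below uses synthetic stand-in data; the feature count, length scale and regularizer are illustrative assumptions.

    import numpy as np

    def rff_map(X, n_features=300, length_scale=1.0, rng=None):
        """Random Fourier features approximating a squared-exponential kernel."""
        rng = rng or np.random.default_rng(0)
        W = rng.normal(scale=1.0 / length_scale, size=(X.shape[1], n_features))
        b = rng.uniform(0, 2 * np.pi, n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(5)
    X = rng.normal(size=(20000, 8))                 # stand-in for spectral/sounder features
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=len(X))

    Z = rff_map(X, n_features=300, length_scale=2.0, rng=rng)
    lam = 1e-2                                      # ridge regularizer
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    print("train RMSE:", np.sqrt(np.mean((Z @ w - y) ** 2)))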

  1. INTERACTIVE CHANGE DETECTION USING HIGH RESOLUTION REMOTE SENSING IMAGES BASED ON ACTIVE LEARNING WITH GAUSSIAN PROCESSES

    Directory of Open Access Journals (Sweden)

    H. Ru

    2016-06-01

    Full Text Available Although there have been many studies of change detection, the effective and efficient use of high resolution remote sensing images is still a problem. Conventional supervised methods need many annotations to classify the land cover categories and detect their changes. Besides, the training set in supervised methods often contains many redundant samples without any essential information. In this study, we present a method for interactive change detection using high resolution remote sensing images with active learning, to overcome the shortcomings of existing remote sensing image change detection techniques. In our method, there is no annotation of the actual land cover category at the beginning. First, we find a certain number of the most representative objects in an unsupervised way. Then, we detect the change areas from multi-temporal high resolution remote sensing images by active learning with Gaussian processes in an interactive way, iterating until the detection results no longer change notably. The manual labelling can be reduced substantially, and a desirable detection result can be obtained in a few iterations. Experiments on GeoEye-1 and WorldView-2 remote sensing images demonstrate the effectiveness and efficiency of our proposed method.

  2. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    Science.gov (United States)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.

  3. Bayesian Sensitivity Analysis of a Cardiac Cell Model Using a Gaussian Process Emulator

    Science.gov (United States)

    Chang, Eugene T Y; Strong, Mark; Clayton, Richard H

    2015-01-01

    Models of electrical activity in cardiac cells have become important research tools as they can provide a quantitative description of detailed and integrative physiology. However, cardiac cell models have many parameters, and how uncertainties in these parameters affect the model output is difficult to assess without undertaking large numbers of model runs. In this study we show that a surrogate statistical model of a cardiac cell model (the Luo-Rudy 1991 model) can be built using Gaussian process (GP) emulators. Using this approach we examined how eight outputs describing the action potential shape and action potential duration restitution depend on six inputs, which we selected to be the maximum conductances in the Luo-Rudy 1991 model. We found that the GP emulators could be fitted to a small number of model runs, and behaved as would be expected based on the underlying physiology that the model represents. We have shown that an emulator approach is a powerful tool for uncertainty and sensitivity analysis in cardiac cell models. PMID:26114610

  4. Bayesian modeling of JET Li-BES for edge electron density profiles using Gaussian processes

    Science.gov (United States)

    Kwak, Sehyun; Svensson, Jakob; Brix, Mathias; Ghim, Young-Chul; JET Contributors Collaboration

    2015-11-01

    A Bayesian model for the JET lithium beam emission spectroscopy (Li-BES) system has been developed to infer edge electron density profiles. The 26 spatial channels measure emission profiles with ~15 ms temporal resolution and ~1 cm spatial resolution. The lithium I (2p-2s) line radiation in an emission spectrum is calculated using a multi-state model, which expresses collisions between the neutral lithium beam atoms and the plasma particles as a set of differential equations. The emission spectrum is described in the model including photon and electronic noise, spectral line shapes, interference filter curves, and relative calibrations. This spectral modeling removes the need for separate background measurements when calculating the intensity of the line radiation. Gaussian processes are applied to model both the emission spectrum and the edge electron density profile, and the electron temperature used to calculate all the rate coefficients is obtained from the JET high resolution Thomson scattering (HRTS) system. The posterior distributions of the edge electron density profile are explored via numerical techniques and Markov chain Monte Carlo (MCMC) sampling. See the Appendix of F. Romanelli et al., Proceedings of the 25th IAEA Fusion Energy Conference 2014, Saint Petersburg, Russia.

  5. Gaussian Processes for Data-Efficient Learning in Robotics and Control.

    Science.gov (United States)

    Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward

    2015-02-01

    Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning allows the amount of engineering knowledge that is otherwise required to be reduced. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.

  6. Disentangling Time-series Spectra with Gaussian Processes: Applications to Radial Velocity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Czekala, Ian [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305 (United States); Mandel, Kaisey S.; Andrews, Sean M.; Dittmann, Jason A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Ghosh, Sujit K. [Department of Statistics, NC State University, 2311 Stinson Drive, Raleigh, NC 27695 (United States); Montet, Benjamin T. [Department of Astronomy and Astrophysics, University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637 (United States); Newton, Elisabeth R., E-mail: iczekala@stanford.edu [Massachusetts Institute of Technology, Cambridge, MA 02138 (United States)

    2017-05-01

    Measurements of radial velocity variations from the spectroscopic monitoring of stars and their companions are essential for a broad swath of astrophysics; these measurements provide access to the fundamental physical properties that dictate all phases of stellar evolution and facilitate the quantitative study of planetary systems. The conversion of those measurements into both constraints on the orbital architecture and individual component spectra can be a serious challenge, however, especially for extreme flux ratio systems and observations with relatively low sensitivity. Gaussian processes define sampling distributions of flexible, continuous functions that are well-motivated for modeling stellar spectra, enabling proficient searches for companion lines in time-series spectra. We introduce a new technique for spectral disentangling, where the posterior distributions of the orbital parameters and intrinsic, rest-frame stellar spectra are explored simultaneously without needing to invoke cross-correlation templates. To demonstrate its potential, this technique is deployed on red-optical time-series spectra of the mid-M-dwarf binary LP661-13. We report orbital parameters with improved precision compared to traditional radial velocity analysis and successfully reconstruct the primary and secondary spectra. We discuss potential applications for other stellar and exoplanet radial velocity techniques and extensions to time-variable spectra. The code used in this analysis is freely available as an open-source Python package.

  7. Personalized Risk Scoring for Critical Care Prognosis Using Mixtures of Gaussian Processes.

    Science.gov (United States)

    Alaa, Ahmed M; Yoon, Jinsung; Hu, Scott; van der Schaar, Mihaela

    2018-01-01

    In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients' heterogeneity. The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.

  8. Joint hierarchical Gaussian process model with application to personalized prediction in medical monitoring.

    Science.gov (United States)

    Duan, Leo L; Wang, Xia; Clancy, John P; Szczesniak, Rhonda D

    2018-01-01

    A two-level Gaussian process (GP) joint model is proposed to improve personalized prediction of medical monitoring data. The proposed model is applied to jointly analyze multiple longitudinal biomedical outcomes, including continuous measurements and binary outcomes, to achieve better prediction in disease progression. At the population level of the hierarchy, two independent GPs are used to capture the nonlinear trends in both the continuous biomedical marker and the binary outcome, respectively; at the individual level, a third GP, which is shared by the longitudinal measurement model and the longitudinal binary model, induces the correlation between these two model components and strengthens information borrowing across individuals. The proposed model is particularly advantageous in personalized prediction. It is applied to the motivating clinical data on cystic fibrosis disease progression, for which lung function measurements and onset of acute respiratory events are monitored jointly throughout each patient's clinical course. The results from both the simulation studies and the cystic fibrosis data application suggest that the inclusion of the shared individual-level GPs under the joint model framework leads to important improvements in personalized disease progression prediction.

  9. Applied Gaussian Process in Optimizing Unburned Carbon Content in Fly Ash for Boiler Combustion

    Directory of Open Access Journals (Sweden)

    Chunlin Wang

    2017-01-01

    Full Text Available Recently, Gaussian Processes (GP) have attracted considerable attention from industry. This article focuses on the application of coal-fired boiler combustion and uses a GP to design a strategy for reducing the Unburned Carbon Content in Fly Ash (UCC-FA), which is the most important indicator of boiler combustion efficiency. Setting aside the complicated physical mechanisms, building a data-driven model such as a GP is an effective way to address the proposed issue. Firstly, a GP is used to model the relationship between the UCC-FA and the boiler combustion operation parameters. The hyperparameters of the GP model are optimized via a Genetic Algorithm (GA). Then, serving as the objective of another GA framework, the UCC-FA predicted by the GP model is used to search for the optimal operating plan for the boiler combustion. Based on 670 sets of real data from a high capacity tangentially fired boiler, two GP models with 21 and 13 inputs, respectively, are developed. In the experimental results, the model with 21 inputs provides better prediction performance than the other. Using the results from the 21-input model, the UCC-FA decreases from 2.7% to 1.7% via optimization of some of the operational parameters, which is a reasonable achievement for boiler combustion.

  10. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Science.gov (United States)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.
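
    Compositional kernel search can be sketched as a greedy loop that grows a kernel by adding or multiplying base kernels and keeps the candidate with the best score. The sketch below scores candidates with BIC only (computed from the fitted log marginal likelihood), not the combined BIC-plus-explained-variance energy described in the paper, and the data are synthetic placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import (RBF, RationalQuadratic,
                                                  DotProduct, WhiteKernel)

    rng = np.random.default_rng(6)
    X = np.sort(rng.uniform(0, 5, size=(60, 1)), axis=0)
    y = 0.5 * X[:, 0] + np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=60)

    def bic(kernel):
        gp = GaussianProcessRegressor(kernel + WhiteKernel(), normalize_y=True).fit(X, y)
        k = len(gp.kernel_.theta)                  # number of fitted hyperparameters
        return -2.0 * gp.log_marginal_likelihood_value_ + k * np.log(len(y))

    base = [RBF(), RationalQuadratic(), DotProduct()]
    best = min(base, key=bic)                      # best single base kernel
    # one greedy expansion step: try adding or multiplying in each base kernel
    candidates = [best + b for b in base] + [best * b for b in base]
    best = min(candidates + [best], key=bic)
    print("selected kernel:", best)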

  11. Leadership and regressive group processes: a pilot study.

    Science.gov (United States)

    Rudden, Marie G; Twemlow, Stuart; Ackerman, Steven

    2008-10-01

    Various perspectives on leadership within the psychoanalytic, organizational and sociobiological literature are reviewed, with particular attention to research studies in these areas. Hypotheses are offered about what makes an effective leader: her ability to structure tasks well in order to avoid destructive regressions, to make constructive use of the omnipresent regressive energies in group life, and to redirect regressions when they occur. Systematic qualitative observations of three videotaped sessions each from N = 18 medical staff work groups at an urban medical center are discussed, as is the utility of a scale, the Leadership and Group Regressions Scale (LGRS), that attempts to operationalize the hypotheses. Analyzing the tapes qualitatively, it was noteworthy that at times (in N = 6 groups), the nominal leader of the group did not prove to be the actual, working leader. Quantitatively, a significant correlation was seen between leaders' LGRS scores and the group's satisfactory completion of their quantitative goals (p = 0.007) and ability to sustain the goals (p = 0.04), when the score of the person who met criteria for group leadership was used.

  12. Intercalibration and Gaussian Process Modeling of Nighttime Lights Imagery for Measuring Urbanization Trends in Africa 2000–2013

    Directory of Open Access Journals (Sweden)

    David J. Savory

    2017-07-01

    Full Text Available Sub-Saharan Africa currently has the world’s highest urban population growth rate of any continent at roughly 4.2% annually. A better understanding of the spatiotemporal dynamics of urbanization across the continent is important to a range of fields including public health, economics, and environmental sciences. Nighttime lights imagery (NTL, maintained by the National Oceanic and Atmospheric Administration, offers a unique vantage point for studying trends in urbanization. A well-documented deficiency of this dataset is the lack of intra- and inter-annual calibration between satellites, which makes the imagery unsuitable for temporal analysis in their raw format. Here we have generated an ‘intercalibrated’ time series of annual NTL images for Africa (2000–2013 by building on the widely used invariant region and quadratic regression method (IRQR. Gaussian process methods (GP were used to identify NTL latent functions independent from the temporal noise signals in the annual datasets. The corrected time series was used to explore the positive association of NTL with Gross Domestic Product (GDP and urban population (UP. Additionally, the proportion of change in ‘lit area’ occurring in urban areas was measured by defining urban agglomerations as contiguously lit pixels of >250 km2, with all other pixels being rural. For validation, the IRQR and GP time series were compared as predictors of the invariant region dataset. Root mean square error values for the GP smoothed dataset were substantially lower. Correlation of NTL with GDP and UP using GP smoothing showed significant increases in R2 over the IRQR method on both continental and national scales. Urban growth results suggested that the majority of growth in lit pixels between 2000 and 2013 occurred in rural areas. With this study, we demonstrated the effectiveness of GP to improve conventional intercalibration, used NTL to describe temporal patterns of urbanization in Africa, and

  13. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.

  14. Dynamic Socialized Gaussian Process Models for Human Behavior Prediction in a Health Social Network

    Science.gov (United States)

    Shen, Yelong; Phan, NhatHai; Xiao, Xiao; Jin, Ruoming; Sun, Junfeng; Piniewski, Brigitte; Kil, David; Dou, Dejing

    2016-01-01

    Modeling and predicting human behaviors, such as the level and intensity of physical activity, is a key to preventing the cascade of obesity and helping spread healthy behaviors in a social network. In our conference paper, we have developed a social influence model, named Socialized Gaussian Process (SGP), for socialized human behavior modeling. Instead of explicitly modeling social influence as individuals' behaviors influenced by their friends' previous behaviors, SGP models the dynamic social correlation as the result of social influence. The SGP model naturally incorporates personal behavior factor and social correlation factor (i.e., the homophily principle: Friends tend to perform similar behaviors) into a unified model. And it models the social influence factor (i.e., an individual's behavior can be affected by his/her friends) implicitly in dynamic social correlation schemes. The detailed experimental evaluation has shown the SGP model achieves better prediction accuracy compared with most of baseline methods. However, a Socialized Random Forest model may perform better at the beginning compared with the SGP model. One of the main reasons is the dynamic social correlation function is purely based on the users' sequential behaviors without considering other physical activity-related features. To address this issue, we further propose a novel “multi-feature SGP model” (mfSGP) which improves the SGP model by using multiple physical activity-related features in the dynamic social correlation learning. Extensive experimental results illustrate that the mfSGP model clearly outperforms all other models in terms of prediction accuracy and running time. PMID:27746515

  15. Gaussian process based independent analysis for temporal source separation in fMRI

    DEFF Research Database (Denmark)

    Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole

    2017-01-01

    Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up the possibility of analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clearly classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals by failing to incorporate its

  16. On some Filtration Procedure for Jump Markov Process Observed in White Gaussian Noise

    OpenAIRE

    Khas'minskii, Rafail Z.; Lazareva, Betty V.

    1992-01-01

    The importance of the optimal filtration problem for a Markov chain with two states observed in Gaussian white noise (GWN) is well known for a number of concrete technical problems. The equation for the posterior probability $\pi(t)$ of one of the states was obtained many years ago. The aim of this paper is to study a simple filtration method. It is shown that this simplified filtration is asymptotically efficient in some sense when the diffusion constant of the GWN goes to 0. Some advantages of this proc...

  17. Simulation by Slepian method of plastic displacements of Gaussian process excited multistory shear frame

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove Dalager

    2004-01-01

    The object of study is a stationary Gaussian white noise excited multi-degree-of-freedom (MDOF) linear elastic, ideal plastic, linearly damped, statically determinate oscillator with several potential elements of ideal plastic yielding. Specifically the study is exemplified for a plane multistory...... shear frame with rigid traverses where all the connecting columns except the columns in one or more of the bottom floors have finite symmetrical yield limits. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground motion...

  18. Resource theory of non-Gaussian operations

    Science.gov (United States)

    Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.

    2018-05-01

    Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and photon-number addition equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards the full understanding of non-Gaussianity.

  19. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    Science.gov (United States)

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give the error analysis. We show how to compute the thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.

  20. A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes

    Science.gov (United States)

    Martin, Rodney Alexander

    2009-01-01

    In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
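
    The Kalman-filter-plus-level-crossing idea described above can be illustrated with a small sketch. The scalar system, noise variances, critical level and alarm threshold below are illustrative assumptions, not values from the paper; the sketch simply runs a one-dimensional Kalman filter and raises an alarm when the one-step-ahead probability of exceeding the level becomes large.

```python
# Toy alarm system: scalar linear Gaussian state-space model, Kalman filter,
# and an alarm based on the predicted probability of a level crossing.
import numpy as np
from scipy.stats import norm

a, q, r = 0.95, 0.5, 0.2           # state transition, process and measurement noise variances
level, p_alarm = 3.0, 0.2          # critical level and alarm threshold (assumed)

rng = np.random.default_rng(8)
x_true, measurements = 0.0, []
for _ in range(200):                               # simulate the system
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
    measurements.append(x_true + rng.normal(0.0, np.sqrt(r)))

x_hat, P = 0.0, 1.0
for t, z in enumerate(measurements):
    x_pred, P_pred = a * x_hat, a * a * P + q      # one-step-ahead prediction
    p_cross = norm.sf(level, loc=x_pred, scale=np.sqrt(P_pred + r))
    if p_cross > p_alarm:
        print(f"t={t}: alarm, predicted P(output > {level}) = {p_cross:.2f}")
    K = P_pred / (P_pred + r)                      # measurement update
    x_hat = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred
```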

  1. Simulation of foulant bioparticle topography based on Gaussian process and its implications for interface behavior research

    Science.gov (United States)

    Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang

    2018-03-01

    Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A survey of the literature indicated a lack of an effective method for simulating randomly rough bioparticle surfaces. In this study, a new method which combines a Gaussian distribution, the Fourier transform, a spectrum method and a coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of the foulant bioparticles was critically affected by the correlation length (l) and the root-mean-square roughness (σ). The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The ease, facility and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
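
    The general recipe summarized in the abstract (Gaussian statistics plus spectral filtering) can be sketched as follows; the grid size, physical length, rms roughness sigma and correlation length l are assumed example values, and the Gaussian power-spectral-density filter is a common textbook choice, not necessarily the authors' exact formulation.

```python
# Generate a randomly rough surface with Gaussian height statistics by filtering
# white noise in the Fourier domain with a Gaussian power spectral density.
import numpy as np

def rough_surface(n=256, length=10.0, sigma=0.1, l=1.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))                   # uncorrelated heights
    freq = np.fft.fftfreq(n, d=length / n)
    kx, ky = np.meshgrid(freq, freq, indexing="ij")
    psd = np.exp(-(np.pi ** 2) * (l ** 2) * (kx ** 2 + ky ** 2))   # Gaussian PSD
    h = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real       # correlated field
    return sigma * h / h.std()                            # enforce the target rms

surface = rough_surface()
print(surface.shape, round(surface.std(), 3))             # (256, 256), ~0.1
```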

  2. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    Science.gov (United States)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ∝ ϱ_net^(-6). Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
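
    As a rough illustration of how a Dirichlet-process Gaussian mixture turns discrete posterior samples into a smooth localization density, the sketch below uses scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior on synthetic three-dimensional samples (right ascension, declination, distance). The sample values and component cap are assumptions, not data from the paper.

```python
# Fit a Dirichlet-process Gaussian mixture to posterior samples and evaluate the
# (log) density at a candidate sky position and distance.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean=[1.2, -0.4, 40.0],          # ra, dec, distance / Mpc
                                  cov=np.diag([0.01, 0.01, 25.0]),
                                  size=5000)

dpgmm = BayesianGaussianMixture(
    n_components=20,                                   # upper bound; the DP prior prunes unused ones
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
).fit(samples)

candidate = np.array([[1.2, -0.4, 38.0]])              # hypothetical host-galaxy location
print(dpgmm.score_samples(candidate))                  # log posterior density
```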

  3. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    Science.gov (United States)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-04-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared, X-ray and optically selected AGN - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGN are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ˜4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole co-evolution and for cosmological studies.

  4. Multiple Model Soft Sensor Based on Affinity Propagation, Gaussian Process and Bayesian Committee Machine

    Institute of Scientific and Technical Information of China (English)

    李修亮; 苏宏业; 褚健

    2009-01-01

    Presented is a multiple model soft sensing method based on Affinity Propagation (AP), Gaussian process (GP) and Bayesian committee machine (BCM). The AP clustering algorithm is used to cluster training samples according to their operating points. Then, the sub-models are estimated by Gaussian Process Regression (GPR). Finally, in order to get a global probabilistic prediction, a Bayesian committee machine is used to combine the outputs of the sub-estimators. The proposed method has been applied to predict the light naphtha end point in hydrocracker fractionators. Practical applications indicate that it is useful for online quality prediction and monitoring in chemical processes.
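
    The cluster-then-combine pipeline in the abstract can be sketched with standard scikit-learn components. The toy data, kernel choice and the simple precision-weighted combination used below are assumptions; in particular, the combination step is only a simplified stand-in for a full Bayesian committee machine.

```python
# Multiple-model soft sensor sketch: Affinity Propagation clustering of operating
# points, one Gaussian process regressor per cluster, and a precision-weighted
# combination of the sub-model predictions.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 2))                       # process variables
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(200)

ap = AffinityPropagation(random_state=0).fit(X)                 # cluster operating points
models = []
for k in np.unique(ap.labels_):
    idx = ap.labels_ == k
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True).fit(X[idx], y[idx])
    models.append(gpr)

def combined_predict(x_new):
    """Combine sub-model predictions, weighting each by its predictive precision."""
    means, stds = zip(*(m.predict(x_new, return_std=True) for m in models))
    precisions = np.array([1.0 / (s ** 2 + 1e-9) for s in stds])
    return (precisions * np.array(means)).sum(axis=0) / precisions.sum(axis=0)

print(combined_predict(np.array([[5.0, 1.0]])))
```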

  5. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  6. Coordinate transformation and Polynomial Chaos for the Bayesian inference of a Gaussian process with parametrized prior covariance function

    KAUST Repository

    Sraj, Ihab

    2015-10-22

    This paper addresses model dimensionality reduction for Bayesian inference based on prior Gaussian fields with uncertainty in the covariance function hyper-parameters. The dimensionality reduction is traditionally achieved using the Karhunen-Loève expansion of a prior Gaussian process assuming covariance function with fixed hyper-parameters, despite the fact that these are uncertain in nature. The posterior distribution of the Karhunen-Loève coordinates is then inferred using available observations. The resulting inferred field is therefore dependent on the assumed hyper-parameters. Here, we seek to efficiently estimate both the field and covariance hyper-parameters using Bayesian inference. To this end, a generalized Karhunen-Loève expansion is derived using a coordinate transformation to account for the dependence with respect to the covariance hyper-parameters. Polynomial Chaos expansions are employed for the acceleration of the Bayesian inference using similar coordinate transformations, enabling us to avoid expanding explicitly the solution dependence on the uncertain hyper-parameters. We demonstrate the feasibility of the proposed method on a transient diffusion equation by inferring spatially-varying log-diffusivity fields from noisy data. The inferred profiles were found closer to the true profiles when including the hyper-parameters’ uncertainty in the inference formulation.
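
    For reference, the fixed-hyper-parameter Karhunen-Loève expansion that the paper generalizes can be sketched in a few lines; the grid, the squared-exponential covariance and the truncation order below are illustrative assumptions.

```python
# Discretize a prior covariance on a 1-D grid, keep the leading eigenpairs and
# synthesize one realization of the Gaussian field from its KL coordinates.
import numpy as np

n, ell = 200, 0.2                                   # grid size, correlation length (assumed)
x = np.linspace(0.0, 1.0, n)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)    # squared-exponential covariance

vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]
k = 15                                              # truncation order
lam, phi = vals[order[:k]], vecs[:, order[:k]]

xi = np.random.default_rng(10).standard_normal(k)   # KL coordinates (standard normal)
field = phi @ (np.sqrt(lam) * xi)                   # one realization of the field
print(field.shape)
```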

  7. A novel construction of complex-valued Gaussian processes with arbitrary spectral densities and its application to excitation energy transfer.

    Science.gov (United States)

    Chen, Xin; Cao, Jianshu; Silbey, Robert J

    2013-06-14

    The recent experimental discoveries about excitation energy transfer (EET) in light harvesting antennae (LHA) have attracted a lot of interest. As an open non-equilibrium quantum system, the EET demands a more rigorous theoretical framework to understand the interaction between system and environment and therein the evolution of the reduced density matrix. A phonon bath is often used to model the fluctuating environment and convolutes the reduced quantum system temporally. In this paper, we propose a novel way to construct complex-valued Gaussian processes to describe the thermal quantum phonon bath exactly, by converting the convolution of the influence functional into the time correlation of a complex Gaussian random field. Based on this construction, we propose a rigorous and efficient computational method, the covariance decomposition and conditional propagation scheme, to simulate the temporally entangled reduced system. The new method allows us to study the non-Markovian effect without perturbation under the influence of different spectral densities of the linear system-phonon coupling coefficients. Its application in the study of EET in the Fenna-Matthews-Olson model Hamiltonian under four different spectral densities is discussed. Since the scaling of our algorithm is linear due to its Monte Carlo nature, the future application of the method to large LHA systems is attractive. In addition, this method can be used to study the effect of correlated initial conditions on the reduced dynamics in the future.

  8. Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves

    Science.gov (United States)

    Wang, Qiang; Ma, Shuxian; Yue, Dong

    2018-04-01

    Composite materials have comprehensively better properties than traditional materials, and therefore have been used more and more widely, especially because of their higher strength-to-weight ratio. However, damage to composite structures is usually varied and complicated. In order to ensure the security of these structures, it is necessary to monitor and distinguish structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved to be effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies a damage identification approach for composite structures using the Lamb wave and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and the parameter estimation, are introduced. Multiple statistical characteristic parameters of the excited Lamb waves are extracted, and a parameter space with reduced dimensions is obtained by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.
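
    A minimal sketch of the PCA-plus-GMM classification idea follows. The feature matrices stand in for the multi-statistic Lamb-wave features, and the damage-class names, component counts and dimensionality are assumptions for illustration only.

```python
# Reduce Lamb-wave features with PCA, fit one Gaussian mixture per damage class,
# and classify a new signal by the largest log-likelihood.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
features = {                                     # rows = signals, cols = wave statistics
    "delamination": rng.normal(0.0, 1.0, (100, 12)),
    "crack":        rng.normal(1.5, 1.0, (100, 12)),
}

pca = PCA(n_components=3).fit(np.vstack(list(features.values())))
gmms = {label: GaussianMixture(n_components=2, random_state=0).fit(pca.transform(F))
        for label, F in features.items()}

def classify(signal_features):
    scores = {label: gmm.score(pca.transform(signal_features.reshape(1, -1)))
              for label, gmm in gmms.items()}
    return max(scores, key=scores.get)           # class with the highest likelihood

print(classify(rng.normal(1.5, 1.0, 12)))        # expected: "crack"
```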

  9. [A SAS macro program for batch processing of univariate Cox regression analysis for large databases].

    Science.gov (United States)

    Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin

    2015-02-01

    To realize batch processing of univariate Cox regression analysis for large databases with a SAS macro program, we wrote a SAS macro program that can filter, integrate, and export P values to Excel using SAS 9.2. The program was used for screening survival-correlated RNA molecules of ovarian cancer. The SAS macro program could complete the batch processing of univariate Cox regression analysis, as well as the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
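
    The paper implements this as a SAS macro; the sketch below only illustrates the same batch-processing idea in Python with the lifelines package (an assumption of this document, not the authors' tool), fitting one univariate Cox model per variable and collecting hazard ratios and p-values.

```python
# Batch univariate Cox regression: one model per candidate variable, results
# gathered into a table sorted by p-value.
import pandas as pd
from lifelines import CoxPHFitter

def batch_univariate_cox(df, duration_col="time", event_col="event"):
    covariates = [c for c in df.columns if c not in (duration_col, event_col)]
    rows = []
    for cov in covariates:
        cph = CoxPHFitter()
        cph.fit(df[[duration_col, event_col, cov]],
                duration_col=duration_col, event_col=event_col)
        rows.append({"variable": cov,
                     "hazard_ratio": float(cph.hazard_ratios_[cov]),
                     "p_value": float(cph.summary.loc[cov, "p"])})
    return pd.DataFrame(rows).sort_values("p_value")

# Example (hypothetical data frame of survival times, event flags and RNA levels):
# results = batch_univariate_cox(expression_df)
# results.to_excel("univariate_cox.xlsx")
```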

  10. Transient simulation of regression rate on thrust regulation process in hybrid rocket motor

    Directory of Open Access Journals (Sweden)

    Tian Hui

    2014-12-01

    Full Text Available The main goal of this paper is to study the characteristics of the regression rate of the solid grain during the thrust regulation process. For this purpose, an unsteady numerical model of the regression rate is established. Gas-solid coupling is considered between the solid grain surface and the combustion gas. A dynamic mesh is used to simulate the regression process of the solid fuel surface. Based on this model, numerical simulations of an H2O2/HTPB (hydroxyl-terminated polybutadiene) hybrid motor have been performed for the flow control process. The simulation results show that under a step change of the oxidizer mass flow rate, the regression rate cannot reach a stable value instantly because the flow field requires a short time period to adjust. The regression rate increases with a linear gain of the oxidizer mass flow rate, and has a higher slope than the relative inlet function of the oxidizer flow rate. A shorter regulation time leads to a higher regression rate during the regulation process. The results also show that transient calculation can better simulate the instantaneous regression rate in the operation process.

  11. Optimization of nonlinear, non-Gaussian Bayesian filtering for diagnosis and prognosis of monotonic degradation processes

    Science.gov (United States)

    Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.

    2018-05-01

    The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.

  12. Stable non-Gaussian self-similar processes with stationary increments

    CERN Document Server

    Pipiras, Vladas

    2017-01-01

    This book provides a self-contained presentation on the structure of a large class of stable processes, known as self-similar mixed moving averages. The authors present a way to describe and classify these processes by relating them to so-called deterministic flows. The first sections in the book review random variables, stochastic processes, and integrals, moving on to rigidity and flows, and finally ending with mixed moving averages and self-similarity. In-depth appendices are also included. This book is aimed at graduate students and researchers working in probability theory and statistics.

  13. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    Science.gov (United States)

    2004-07-07

    Physica A 347 (2005) 268-288. PACS: 05.40.a; 89.75.k; 02.50.Ey. Keywords: stochastic processes; non-Poisson processes; Liouville and Liouville-like equations; correlation function. Recoverable abstract fragment: "... which is not legitimate with renewal non-Poisson processes, is a correct property if the deviation from the exponential relaxation is obtained by time ..."

  14. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression . Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant
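
    A toy version of the kind of combined comparison described above is sketched here: one simple pre-processing step, no explicit wavelength selection, and a cross-validated comparison of three of the regression methods named in the abstract. The synthetic spectra, the centering step and the hyper-parameters are all assumptions for illustration.

```python
# Cross-validate PLS, LASSO and Gaussian process regression on (pre-processed)
# synthetic NIR-like spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 200))                       # 80 spectra x 200 wavelengths
y = 2.0 * X[:, 50] + X[:, 120] + 0.1 * rng.standard_normal(80)

X_pre = X - X.mean(axis=1, keepdims=True)            # crude per-spectrum centering

models = {"PLS": PLSRegression(n_components=5),
          "LASSO": Lasso(alpha=0.01),
          "GPR": GaussianProcessRegressor()}
for name, model in models.items():
    score = cross_val_score(model, X_pre, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
```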

  15. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    International Nuclear Information System (INIS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Shen, Aiguo; Hu, Jiming; Jia, Jun

    2013-01-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory. (paper)

  16. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    Science.gov (United States)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.

  17. Representation of Gaussian semimartingales with applications to the covariance function

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2010-01-01

    stationary Gaussian semimartingales and their canonical decomposition. Thirdly, we give a new characterization of the covariance function of Gaussian semimartingales, which enable us to characterize the class of martingales and the processes of bounded variation among the Gaussian semimartingales. We...

  18. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction

    International Nuclear Information System (INIS)

    Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De

    2008-01-01

    The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression) or with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
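
    The contrast between the two calibration strategies can be sketched with a straight-line toy model. The simulated dose/OD data, the heteroscedastic noise model and the measured OD value below are invented for illustration; the real dose-OD relation is of course non-linear.

```python
# OLS inverse regression (dose ~ OD) versus WLS inverse prediction
# (OD ~ dose with weights, then inverted) on simulated calibration points.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
dose = np.array([0., 50., 100., 150., 200., 250., 300., 350., 400.])   # cGy
sd_od = 0.002 + 1e-5 * dose                        # OD noise grows with dose
od = 0.05 + 0.002 * dose + rng.normal(0.0, sd_od)

# OLS inverse regression: regress dose directly on OD (ignores heteroscedasticity)
ols = sm.OLS(dose, sm.add_constant(od)).fit()

# WLS inverse prediction: regress OD on dose with weights 1/var, then invert the fit
wls = sm.WLS(od, sm.add_constant(dose), weights=1.0 / sd_od ** 2).fit()
a, b = wls.params

od_measured = 0.45
print("OLS inverse regression :", ols.predict(np.array([[1.0, od_measured]]))[0], "cGy")
print("WLS inverse prediction :", (od_measured - a) / b, "cGy")
```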

  19. Reforging the Wedding Ring: Exploring a Semi-Artificial Model of Population for the United Kingdom with Gaussian process emulators

    Directory of Open Access Journals (Sweden)

    Viet Dung Cao

    2013-10-01

    Full Text Available Background: We extend the 'Wedding Ring' agent-based model of marriage formation to include some empirical information on the natural population change for the United Kingdom together with behavioural explanations that drive the observed nuptiality trends. Objective: We propose a method to explore statistical properties of agent-based demographic models. By coupling rule-based explanations driving the agent-based model with observed data we wish to bring agent-based modelling and demographic analysis closer together. Methods: We present a Semi-Artificial Model of Population, which aims to bridge demographic micro-simulation and agent-based traditions. We then utilise a Gaussian process emulator - a statistical model of the base model - to analyse the impact of selected model parameters on two key model outputs: population size and share of married agents. A sensitivity analysis is attempted, aiming to assess the relative importance of different inputs. Results: The resulting multi-state model of population dynamics has enhanced predictive capacity as compared to the original specification of the Wedding Ring, but there are some trade-offs between the outputs considered. The sensitivity analysis allows identification of the most important parameters in the modelled marriage formation process. Conclusions: The proposed methods allow for generating coherent, multi-level agent-based scenarios aligned with some aspects of empirical demographic reality. Emulators permit a statistical analysis of their properties and help select plausible parameter values. Comments: Given non-linearities in agent-based models such as the Wedding Ring, and the presence of feedback loops, the uncertainty in the model may not be directly computable by using traditional statistical methods. The use of statistical emulators offers a way forward.

  20. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  1. Time series modeling by a regression approach based on a latent process.

    Science.gov (United States)

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  2. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    Science.gov (United States)

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we

  3. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    Directory of Open Access Journals (Sweden)

    Christophe Coupé

    2018-04-01

    Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM, which address grouping of observations, and generalized linear mixed-effects models (GLMM, which offer a family of distributions for the dependent variable. Generalized additive models (GAM are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS. We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships

  4. Application of Multivariate Adaptive Regression Splines to Sheet Metal Bending Process for Springback Compensation

    Directory of Open Access Journals (Sweden)

    Dilan Rasim Aşkın

    2016-01-01

    Full Text Available An intelligent regression technique is applied to sheet metal bending processes to improve bending performance. This study is part of another extensive study, automated sheet bending assistance for press brakes. Data related to the material properties of the sheet metal are collected in an online manner and fed to an intelligent system for determining the most accurate punch displacement without any offline iteration or calibration. The overall system aims to reduce the production time while increasing the performance of press brakes.

  5. [Optimization of processing technology for semen cuscuta by uniform and regression analysis].

    Science.gov (United States)

    Li, Chun-yu; Luo, Hui-yu; Wang, Shu; Zhai, Ya-nan; Tian, Shu-hui; Zhang, Dan-shen

    2011-02-01

    To optimize the best preparation technology with respect to the contents of total flavonoids and polysaccharides and the percentage of water- and alcohol-soluble components in Semen Cuscuta herb processing, UV spectrophotometry was applied to determine the contents of total flavonoids and polysaccharides extracted from Semen Cuscuta, and the processing was optimized by means of uniform design and contour maps. The best preparation technology was obtained under the following conditions: baking temperature 150 degrees C, baking time 140 seconds. The regression models are significant and reasonable, and can forecast the results precisely.

  6. Regression tools for CO2 inversions: application of a shrinkage estimator to process attribution

    International Nuclear Information System (INIS)

    Shaby, Benjamin A.; Field, Christopher B.

    2006-01-01

    In this study we perform an atmospheric inversion based on a shrinkage estimator. This method is used to estimate surface fluxes of CO2, first partitioned according to constituent geographic regions, and then according to constituent processes that are responsible for the total flux. Our approach differs from previous approaches in two important ways. The first is that the technique of linear Bayesian inversion is recast as a regression problem. Seen as such, standard regression tools are employed to analyse and reduce errors in the resultant estimates. A shrinkage estimator, which combines standard ridge regression with the linear 'Bayesian inversion' model, is introduced. This method introduces additional bias into the model with the aim of reducing variance such that errors are decreased overall. Compared with standard linear Bayesian inversion, the ridge technique seems to reduce both flux estimation errors and prediction errors. The second divergence from previous studies is that instead of dividing the world into geographically distinct regions and estimating the CO2 flux in each region, the flux space is divided conceptually into processes that contribute to the total global flux. Formulating the problem in this manner adds to the interpretability of the resultant estimates and attempts to shed light on the problem of attributing sources and sinks to their underlying mechanisms.
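
    The recasting of the inversion as a regression problem, and the effect of shrinkage, can be illustrated with a toy linear system. The transport operator, the "true" fluxes, the noise level and the ridge penalty below are all invented; the point is only to show ordinary least squares versus a ridge (shrinkage) estimate when the columns of the operator are nearly collinear.

```python
# Toy linear inversion y = K f + noise: compare OLS with ridge shrinkage.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
K = rng.normal(size=(60, 20))                            # observation operator
K[:, 10:] = K[:, :10] + 0.01 * rng.normal(size=(60, 10)) # nearly collinear columns
f_true = rng.normal(size=20)                             # "true" process/region fluxes
y = K @ f_true + 0.5 * rng.standard_normal(60)           # noisy observations

ols = LinearRegression(fit_intercept=False).fit(K, y)
ridge = Ridge(alpha=5.0, fit_intercept=False).fit(K, y)

print("OLS   estimation error:", np.linalg.norm(ols.coef_ - f_true))
print("Ridge estimation error:", np.linalg.norm(ridge.coef_ - f_true))
```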

  7. How to Address the Data Quality Issues in Regression Models: A Guided Process for Data Cleaning

    Directory of Open Access Journals (Sweden)

    David Camilo Corrales

    2018-04-01

    Full Text Available Today, data availability has gone from scarce to superabundant. Technologies like IoT, trends in social media and the capabilities of smart-phones are producing and digitizing lots of data that was previously unavailable. This massive increase of data creates opportunities for new business models, but also demands new techniques and methods of data quality in knowledge discovery, especially when the data comes from different sources (e.g., sensors, social networks, cameras, etc.). The data quality process applied to a data set supports the conclusions drawn from the information it contains; this is increasingly done with the aid of data cleaning approaches. Therefore, guaranteeing high data quality is considered the primary goal of the data scientist. In this paper, we propose a process for data cleaning in regression models (DC-RM). The proposed data cleaning process is evaluated on real datasets from the UCI Repository of Machine Learning Databases. With the aim of assessing the data cleaning process, the dataset cleaned by DC-RM was used to train the same regression models proposed by the authors of the UCI datasets. The results achieved by the models trained with the dataset produced by DC-RM are better than or equal to those presented by the datasets' authors.

  8. Probabilistic analysis and fatigue damage assessment of offshore mooring system due to non-Gaussian bimodal tension processes

    Science.gov (United States)

    Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng

    2017-08-01

    Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.

  9. Analysis of the Covered Electrode Welding Process Stability on the Basis of Linear Regression Equation

    Directory of Open Access Journals (Sweden)

    Słania J.

    2014-10-01

    Full Text Available The article presents the process of production of coated electrodes and their welding properties. The factors concerning the welding properties and the currently applied method of assessing them are given. The methodology of the testing, based on measuring and recording instantaneous values of the welding current and welding arc voltage, is discussed. An algorithm for creating the reference database of the expert system, aiding the assessment of covered electrodes' welding properties, is shown. The stability of the voltage-current characteristics is discussed. Statistical factors of the waveforms of instantaneous welding current and welding arc voltage values, used for determining the stability of the welding process, are presented. The results for the welding properties of the coated electrodes are compared. The article presents the results of the linear regression as well as the impact of the independent variables on the welding process performance. Finally, the conclusions drawn from the research are given.

  10. Statistical Downscaling Output GCM Modeling with Continuum Regression and Pre-Processing PCA Approach

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2010-08-01

    Full Text Available One of the climate models used to predict climatic conditions is the Global Circulation Model (GCM). A GCM is a computer-based model consisting of different equations; it uses numerical, deterministic equations that follow the laws of physics. GCMs are a main tool for predicting climate and weather, and are also used as a primary information source for reviewing the effects of climate change. The Statistical Downscaling (SD) technique is used to bridge the large-scale GCM with a small scale (the study area). GCM data are spatial and temporal, so spatial correlation between data on different grid points within a single domain is likely. Multicollinearity problems therefore require pre-processing of the predictor data X. Continuum Regression (CR) and pre-processing with Principal Component Analysis (PCA) are alternatives for SD modelling. CR is a method developed by Stone and Brooks (1990); it is a generalization of the Ordinary Least Squares (OLS), Principal Component Regression (PCR) and Partial Least Squares (PLS) methods, used to overcome multicollinearity problems. Data processing for the stations in Ambon, Pontianak, Losarang, Indramayu and Yuntinyuat shows that, in terms of RMSEP values and predictive R2 in the 8x8 and 12x12 domains, the CR method produces better results than PCR and PLS.
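
    As a small illustration of why pre-processing with PCA helps with collinear GCM grid variables, the sketch below builds a principal component regression (PCA followed by linear regression) on synthetic, highly correlated predictors. The data shapes, component count and variable names are assumptions; continuum regression itself is not implemented here.

```python
# Principal component regression on collinear "grid" predictors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
base = rng.normal(size=(240, 5))                        # latent large-scale signals
X_gcm = np.repeat(base, 16, axis=1) + 0.05 * rng.normal(size=(240, 80))  # collinear grid data
rainfall = base @ np.array([3.0, -1.0, 0.5, 0.0, 0.0]) + rng.normal(0.0, 1.0, 240)

pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X_gcm, rainfall)
print("in-sample R^2:", round(pcr.score(X_gcm, rainfall), 3))
```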

  11. Data processing for potentiometric precipitation titration of mixtures of isovalent ions by linear regression analysis

    International Nuclear Information System (INIS)

    Mar'yanov, B.M.; Shumar, S.V.; Gavrilenko, M.A.

    1994-01-01

    A method for the computer processing of potentiometric differential titration curves based on precipitation reactions is developed. The method is based on transformation of the titration curve into a multiphase regression line whose parameters determine the equivalence points and the solubility products of the precipitates formed. The computational algorithm is tested using experimental curves for the titration of solutions containing Hg(II) and Cd(II) by a solution of sodium diethyldithiocarbamate. The random errors (RSD) for the titration of 1x10^-4 M solutions are in the range of 3-6%. 7 refs.; 2 figs.; 1 tab

  12. Endpoint in plasma etch process using new modified w-multivariate charts and windowed regression

    Science.gov (United States)

    Zakour, Sihem Ben; Taleb, Hassen

    2017-09-01

    Endpoint detection is a very important undertaking for understanding and determining whether a plasma etching process has been carried out correctly, especially if the etched area is very small (0.1%). It is a crucial part of delivering repeatable results on every wafer. When the film being etched has been completely cleared, the endpoint is reached. To ensure the desired device performance on the produced integrated circuit, the high optical emission spectroscopy (OES) sensor is employed. The huge number of gathered wavelengths (profiles) is then analyzed and pre-processed using a newly proposed simple algorithm named Spectra Peak Selection (SPS) to select the important wavelengths; wavelet analysis (WA) is then employed to enhance detection performance by suppressing noise and redundant information. The selected and treated OES wavelengths are then used in modified multivariate control charts (MEWMA and Hotelling) for three statistics (mean, SD and CV) and in windowed polynomial regression for the mean. The use of the three aforementioned statistics is motivated by the need to control mean shifts, variance shifts and their ratio (CV) when both the mean and SD are not stable. The control charts show their performance in detecting the endpoint, especially the W-mean Hotelling chart, while the worst result is given by the CV statistic. As the best detection of the endpoint is given by the W-Hotelling mean statistic, this statistic is used to construct a windowed wavelet Hotelling polynomial regression. The latter can only identify the window containing the endpoint phenomenon.
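
    The Hotelling-type monitoring described above can be sketched in a few lines. The in-control data, the number of monitored wavelengths and the 99% control limit below are illustrative assumptions, and the sketch covers only the basic T^2 statistic, not the windowed wavelet regression.

```python
# Hotelling T^2 monitoring of windowed means of a few selected OES wavelengths.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(11)
baseline = rng.normal(0.0, 1.0, size=(200, 4))          # in-control windowed means
mu = baseline.mean(axis=0)
S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

n, p = baseline.shape
ucl = p * (n + 1) * (n - 1) / (n * (n - p)) * f_dist.ppf(0.99, p, n - p)  # control limit

def t2(x):
    d = x - mu
    return float(d @ S_inv @ d)

new_window = rng.normal(0.8, 1.0, size=4)               # shifted mean near the endpoint
print(round(t2(new_window), 2), "alarm" if t2(new_window) > ucl else "in control")
```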

  13. Regressive research: The pitfalls of post hoc data selection in the study of unconscious mental processes.

    Science.gov (United States)

    Shanks, David R

    2017-06-01

    Many studies of unconscious processing involve comparing a performance measure (e.g., some assessment of perception or memory) with an awareness measure (such as a verbal report or a forced-choice response) taken either concurrently or separately. Unconscious processing is inferred when above-chance performance is combined with null awareness. Often, however, aggregate awareness is better than chance, and data analysis therefore employs a form of extreme group analysis focusing post hoc on participants, trials, or items where awareness is absent or at chance. The pitfalls of this analytic approach are described with particular reference to recent research on implicit learning and subliminal perception. Because of regression to the mean, the approach can mislead researchers into erroneous conclusions concerning unconscious influences on behavior. Recommendations are made about future use of post hoc selection in research on unconscious cognition.

  14. FUZZY REGRESSION MODEL TO PREDICT THE BEAD GEOMETRY IN THE ROBOTIC WELDING PROCESS

    Institute of Scientific and Technical Information of China (English)

    B.S. Sung; I.S. Kim; Y. Xue; H.H. Kim; Y.H. Cha

    2007-01-01

    Recently, there has been rapid development in computer technology, which has in turn led to the development of fully robotic welding systems using artificial intelligence (AI) technology. However, the fully robotic welding system has not yet been achieved due to difficulties with the mathematical model and sensor technologies. The possibilities of the fuzzy regression method for predicting the bead geometry, such as bead width, bead height, bead penetration and bead area, in the robotic GMA (gas metal arc) welding process are presented. The approach, a well-known method to deal with problems with a high degree of fuzziness, is used to build the relationships between the four process variables and the four quality characteristics, respectively. Using these models, the proper prediction of the process variables for obtaining the optimal bead geometry can be determined.

  15. Limit theorems for functionals of Gaussian vectors

    Institute of Scientific and Technical Information of China (English)

    Hongshuai DAI; Guangjun SHEN; Lingtao KONG

    2017-01-01

    Operator self-similar processes, as an extension of self-similar processes, have been studied extensively. In this work, we study limit theorems for functionals of Gaussian vectors. Under some conditions, we determine that the limit of partial sums of functionals of a stationary Gaussian sequence of random vectors is an operator self-similar process.

  16. On the dependence structure of Gaussian queues

    NARCIS (Netherlands)

    Es-Saghouani, A.; Mandjes, M.R.H.

    2009-01-01

    In this article we study Gaussian queues (that is, queues fed by Gaussian processes, such as fractional Brownian motion (fBm) and the integrated Ornstein-Uhlenbeck (iOU) process), with a focus on the dependence structure of the workload process. The main question is to what extent does the workload

  17. Non-gaussian turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hoejstrup, J [NEG Micon Project Development A/S, Randers (Denmark); Hansen, K S [Denmarks Technical Univ., Dept. of Energy Engineering, Lyngby (Denmark); Pedersen, B J [VESTAS Wind Systems A/S, Lem (Denmark); Nielsen, M [Risoe National Lab., Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    The pdfs of atmospheric turbulence have somewhat wider tails than a Gaussian, especially regarding accelerations, whereas velocities are close to Gaussian. This behaviour is being investigated using data from a large WEB-database in order to quantify the amount of non-Gaussianity. Models for non-Gaussian turbulence have been developed, by which artificial turbulence can be generated with specified distributions, spectra and cross-correlations. The artificial time series will then be used in load models and the resulting loads in the Gaussian and the non-Gaussian cases will be compared. (au)

  18. Implementation and use of Gaussian process meta model for sensitivity analysis of numerical models: application to a hydrogeological transport computer code

    International Nuclear Information System (INIS)

    Marrel, A.

    2008-01-01

    In studies of environmental transfer and risk assessment, numerical models are used to simulate, understand and predict the transfer of pollutants. These computer codes can depend on a high number of uncertain input parameters (geophysical variables, chemical parameters, etc.) and can often be too expensive in computing time. To conduct uncertainty propagation studies and to measure the importance of each input on the response variability, the computer code has to be approximated by a metamodel which is built on an acceptable number of simulations of the code and requires a negligible calculation time. We focused our research work on the use of a Gaussian process metamodel to carry out the sensitivity analysis of the code. We proposed a methodology with estimation and input selection procedures in order to build the metamodel in the case of a high number of inputs and with few simulations available. Then, we compared two approaches to compute the sensitivity indices with the metamodel and proposed an algorithm to build prediction intervals for these indices. Afterwards, we were interested in the choice of the code simulations. We studied the influence of different sampling strategies on the predictiveness of the Gaussian process metamodel. Finally, we extended our statistical tools to a functional output of a computer code. We combined a decomposition on a wavelet basis with the Gaussian process modelling before computing the functional sensitivity indices. All the tools and statistical methodologies that we developed were applied to the real case of a complex hydrogeological computer code, simulating radionuclide transport in groundwater. (author) [fr]
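
    The metamodel idea lends itself to a compact sketch: fit a Gaussian process surrogate to a handful of runs of an (here, fake) expensive simulator, then use the cheap surrogate to estimate first-order variance-based sensitivity indices by brute-force Monte Carlo. The simulator, sample sizes and kernel are illustrative assumptions and do not reproduce the thesis' methodology.

```python
# Gaussian process surrogate + double-loop Monte Carlo estimate of first-order
# sensitivity indices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                                    # stand-in for the expensive code
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.05 * x[:, 2]

rng = np.random.default_rng(6)
X_train = rng.uniform(0.0, 1.0, size=(60, 3))        # 60 affordable code runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0] * 3),
                              normalize_y=True).fit(X_train, simulator(X_train))

total_var = gp.predict(rng.uniform(0.0, 1.0, size=(2000, 3))).var()

def first_order_index(i, n_outer=200, n_inner=200):
    cond_means = []
    for v in rng.uniform(0.0, 1.0, n_outer):         # E[Y | X_i = v] via the surrogate
        X = rng.uniform(0.0, 1.0, size=(n_inner, 3))
        X[:, i] = v
        cond_means.append(gp.predict(X).mean())
    return np.var(cond_means) / total_var            # Var(E[Y | X_i]) / Var(Y)

print([round(first_order_index(i), 2) for i in range(3)])
```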

  19. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    Science.gov (United States)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can create thermally affected layers in specimens. Therefore, the temperature distribution and the Heat Affected Zone (HAZ) of materials are two critical factors which are influenced by different process parameters. Measurement of the HAZ thickness and the temperature distribution within such processes is not only difficult but also expensive. This research aims at gaining valuable knowledge of these factors by predicting the process with a novel combinatory model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) was used to predict the HAZ and temperature distribution of the specimens. To this end, a series of full factorial design of experiments was first conducted by applying a Gaussian heat flux on Ti-6Al-4V, and the temperature of the specimen was measured by infrared thermography. The HAZ width of each sample was investigated by measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model. The efficiency of the GA in designing and optimizing the architecture of the ANN was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, and the learning rate and momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed according to the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.

  20. Asymmetrical Responses of Ecosystem Processes to Positive Versus Negative Precipitation Extremes: a Replicated Regression Experimental Approach

    Science.gov (United States)

    Felton, A. J.; Smith, M. D.

    2016-12-01

    Heightened climatic variability due to atmospheric warming is forecast to increase the frequency and severity of climate extremes. In particular, changes to interannual variability in precipitation, characterized by increases in extreme wet and dry years, are likely to impact virtually all terrestrial ecosystem processes. However, to date experimental approaches have yet to explicitly test how ecosystem processes respond to multiple levels of climatic extremity, limiting our understanding of how ecosystems will respond to forecast increases in the magnitude of climate extremes. Here we report the results of a replicated regression experimental approach, in which we imposed 9 and 11 levels of growing season precipitation amount and extremity in mesic grassland during 2015 and 2016, respectively. Each level corresponded to a specific percentile of the long-term record, which produced a large gradient of soil moisture conditions that ranged from extreme wet to extreme dry. In both 2015 and 2016, asymptotic responses to water availability were observed for soil respiration. This asymmetry was driven in part by transitions between soil moisture versus temperature constraints on respiration as conditions became increasingly dry versus increasingly wet. In 2015, aboveground net primary production (ANPP) exhibited asymmetric responses to precipitation that largely mirrored those of soil respiration. In total, our results suggest that in this mesic ecosystem, these two carbon cycle processes were more sensitive to extreme drought than to extreme wet years. Future work will assess ANPP responses for 2016, soil nutrient supply and physiological responses of the dominant plant species. Future efforts are needed to compare our findings across a diverse array of ecosystem types, and in particular how the timing and magnitude of precipitation events may modify the response of ecosystem processes to increasing magnitudes of precipitation extremes.

  1. An integrated study of surface roughness in EDM process using regression analysis and GSO algorithm

    Science.gov (United States)

    Zainal, Nurezayana; Zain, Azlan Mohd; Sharif, Safian; Nuzly Abdull Hamed, Haza; Mohamad Yusuf, Suhaila

    2017-09-01

    The aim of this study is to develop an integrated study of surface roughness (Ra) in the die-sinking electrical discharge machining (EDM) process of Ti-6Al-4V titanium alloy with a positive-polarity copper-tungsten (Cu-W) electrode. Regression analysis and the glowworm swarm optimization (GSO) algorithm were used for the modelling and optimization process. Pulse on time (A), pulse off time (B), peak current (C) and servo voltage (D) were selected as the machining parameters, each at several levels. The experiments were conducted based on a two-level full factorial design with an added center point design of experiments (DOE). Mathematical models with linear and two-factor interaction (2FI) effects of the chosen parameters were developed, and the fit and adequacy of the developed models were tested using analysis of variance (ANOVA) and the F-test. The statistical analysis showed that the 2FI model outperformed the linear model, giving the lowest Ra when compared with the experimental results.

  2. Gráfico de controle de regressão aplicado na monitoração de processos Regression control chart applied in process monitoring

    Directory of Open Access Journals (Sweden)

    Luciane Flores Jacobi

    2002-01-01

    Full Text Available This research applies the regression control chart as a statistical control tool to monitor production processes in which a state variable of interest can be expressed as a function of a control variable. Several studies exist on the control of variables in production processes, but most address the control of each variable separately and therefore cannot be used for a comparative study. This research thus presents an efficient technique for the simultaneous control of correlated variables.
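
    A minimal sketch of a regression control chart of the kind described above, assuming a single control variable x and a state variable y; the ±3-sigma limits around the fitted line are a common convention and not necessarily the exact limits used in the paper.

```python
# Illustrative regression control chart: fit y = a + b*x on in-control data,
# then flag new observations whose residuals fall outside +/- 3 sigma.
import numpy as np

rng = np.random.default_rng(1)
x_ref = rng.uniform(0, 10, 50)                        # control variable (reference run)
y_ref = 2.0 + 0.8 * x_ref + rng.normal(0, 0.3, 50)    # state variable

b, a = np.polyfit(x_ref, y_ref, 1)                    # slope, intercept
sigma = np.std(y_ref - (a + b * x_ref), ddof=2)       # residual std (2 fitted parameters)

def out_of_control(x_new, y_new):
    """True if the new point lies outside the regression control limits."""
    residual = y_new - (a + b * x_new)
    return abs(residual) > 3 * sigma

print(out_of_control(5.0, 6.1))   # close to the fitted line -> False
print(out_of_control(5.0, 9.0))   # far above the fitted line -> True
```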

  3. Some continual integrals from gaussian forms

    International Nuclear Information System (INIS)

    Mazmanishvili, A.S.

    1985-01-01

    A summary of results on the continual (path) integration of Gaussian functionals is given. The summary contains 124 continual integrals, each being the mathematical expectation of the corresponding Gaussian form over a continuum of random trajectories of four types: the real-valued Ornstein-Uhlenbeck process, the Wiener process, the complex-valued Ornstein-Uhlenbeck process and the stochastic harmonic process. The summary includes both known continual integrals and previously unpublished ones. The mathematical results of the continual integration carried out in this work may be applied to problems in the theory of stochastic processes, in particular to finding means of Gaussian forms with respect to measures generated by the above stochastic processes

  4. The process and utility of classification and regression tree methodology in nursing research.

    Science.gov (United States)

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-06-01

    This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.
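
    As a concrete illustration of the method discussed above (not the authors' own analysis), a regression tree can be fitted and displayed as a text tree diagram with a few lines of scikit-learn; the variables and outcome here are purely hypothetical.

```python
# Illustrative regression tree on hypothetical health-style data,
# showing how interactions surface as successive splits in the tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
age = rng.integers(20, 90, 200).astype(float)
bmi = rng.uniform(18, 40, 200)
# Hypothetical outcome with an interaction: risk rises for older, high-BMI patients.
risk = 0.02 * age + np.where((age > 60) & (bmi > 30), 2.0, 0.0) + rng.normal(0, 0.2, 200)

X = np.column_stack([age, bmi])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, risk)
print(export_text(tree, feature_names=["age", "bmi"]))   # inverted tree diagram as text
```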

  5. Shedding new light on Gaussian harmonic analysis

    NARCIS (Netherlands)

    Teuwen, J.J.B.

    2016-01-01

    This dissertation consists of two rather disjoint parts. One part concerns some results on Gaussian harmonic analysis and the other an optimization problem in optics. In the first part we study the Ornstein–Uhlenbeck process with respect to the Gaussian measure. We focus on two areas. One is

  6. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    Science.gov (United States)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used for their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on the IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with a maximum error below 2.5%, which demonstrates the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
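
    A minimal sketch of the IC-curve processing chain described above, assuming measured charge and voltage arrays; the filter width, the single feature of interest and the linear capacity-versus-FOI data below are illustrative, not the calibrated values of the paper.

```python
# Illustrative IC-analysis pipeline: smooth dQ/dV with a Gaussian filter,
# locate a feature of interest (FOI), and relate its position to capacity.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(3)
V = np.linspace(3.4, 4.2, 800)                                              # terminal voltage (V)
Q = 2.5 / (1 + np.exp(-(V - 3.7) / 0.05)) + 0.002 * rng.normal(size=800)    # charge (Ah)

dQdV = np.gradient(Q, V)                          # raw incremental capacity curve
dQdV_smooth = gaussian_filter1d(dQdV, sigma=10)   # Gaussian filter suppresses the noise

foi_voltage = V[np.argmax(dQdV_smooth)]           # position of the main IC peak
print(f"FOI position: {foi_voltage:.3f} V")

# With several aged cells, capacity is regressed linearly on the FOI position:
foi_positions = np.array([3.700, 3.705, 3.712, 3.720])   # hypothetical, one per cell
capacities = np.array([2.50, 2.44, 2.36, 2.27])          # measured capacities (Ah)
slope, intercept = np.polyfit(foi_positions, capacities, 1)
print(f"SoH model: capacity = {slope:.2f} * FOI + {intercept:.2f}")
```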

  7. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.
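
    The GAP framework itself (SOAP descriptors, sums of local atomic energies, DFT reference data) is considerably more involved, but the Gaussian process regression step it builds on can be sketched on generic descriptor vectors; everything below is a toy stand-in rather than the published potential.

```python
# Toy Gaussian process regression of energies on precomputed descriptor vectors,
# illustrating the regression step underlying GAP (not the actual GAP/SOAP model).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
descriptors = rng.normal(size=(100, 8))              # stand-in for SOAP-like vectors
energies = np.sin(descriptors[:, 0]) + 0.1 * descriptors[:, 1] \
           + 0.01 * rng.normal(size=100)             # stand-in for reference energies

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(descriptors, energies)

test = rng.normal(size=(5, 8))
mean, std = gp.predict(test, return_std=True)        # prediction with uncertainty
print(mean, std)
```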

  8. Gaussian entanglement revisited

    Science.gov (United States)

    Lami, Ludovico; Serafini, Alessio; Adesso, Gerardo

    2018-02-01

    We present a novel approach to the separability problem for Gaussian quantum states of bosonic continuous variable systems. We derive a simplified necessary and sufficient separability criterion for arbitrary Gaussian states of m versus n modes, which relies on convex optimisation over marginal covariance matrices on one subsystem only. We further revisit the currently known results stating the equivalence between separability and positive partial transposition (PPT) for specific classes of Gaussian states. Using techniques based on matrix analysis, such as Schur complements and matrix means, we then provide a unified treatment and compact proofs of all these results. In particular, we recover the PPT-separability equivalence for: (i) Gaussian states of 1 versus n modes; and (ii) isotropic Gaussian states. In passing, we also retrieve (iii) the recently established equivalence between separability of a Gaussian state and its complete Gaussian extendability. Our techniques are then applied to progress beyond the state of the art. We prove that: (iv) Gaussian states that are invariant under partial transposition are necessarily separable; (v) the PPT criterion is necessary and sufficient for separability for Gaussian states of m versus n modes that are symmetric under the exchange of any two modes belonging to one of the parties; and (vi) Gaussian states which remain PPT under passive optical operations cannot be entangled by them either. This is not a foregone conclusion per se (since Gaussian bound entangled states do exist) and settles a question that had been left unanswered in the existing literature on the subject. This paper, enjoyable by both the quantum optics and the matrix analysis communities, overall delivers technical and conceptual advances which are likely to be useful for further applications in continuous variable quantum information theory, beyond the separability problem.

  9. Statistically tuned Gaussian background subtraction technique for ...

    Indian Academy of Sciences (India)

    temporal median method and mixture of Gaussian model and performance evaluation ... to process the videos captured by unmanned aerial vehicle (UAV). ..... The output is obtained by simulation using MATLAB 2010 in a standalone PC with ...

  10. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process.

    Science.gov (United States)

    Wilson, Lorna R M; Hopcraft, Keith I

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  11. Periodicity in the autocorrelation function as a mechanism for regularly occurring zero crossings or extreme values of a Gaussian process

    Science.gov (United States)

    Wilson, Lorna R. M.; Hopcraft, Keith I.

    2017-12-01

    The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.

  12. Regression Modeling of EDM Process for AISI D2 Tool Steel with RSM

    Directory of Open Access Journals (Sweden)

    Shakir M. Mousa

    2018-01-01

    Full Text Available In this paper, the Response Surface Method (RSM) is utilized to investigate the impact of the input parameters electrode type (E.T.) [Gr, Cu and CuW], pulse current (Ip), pulse on time (Ton) and pulse off time (Toff) on the surface finish in the EDM operation. A second-order regression model, generally accepted for surface roughness (Ra), is adopted, and a Central Composite Design (CCD) is utilized to estimate the model coefficients of the input parameters. Experiments were performed on AISI D2 tool steel. The significant coefficients are obtained by performing an Analysis of Variance (ANOVA) at the 5% significance level. The outcomes show that surface roughness (Ra) is influenced mainly by E.T., Ton, Toff and Ip, with only a small contribution from their interactions. A mathematical regression model was developed to predict the average surface roughness (Ra). Furthermore, to save time, the developed model can be used to select suitable parameter levels for the EDM procedure. The model adequacy is very satisfactory, with a coefficient of determination (R2) of 99.72% and an adjusted R2 (R2adj) of 99.60%.
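
    A minimal sketch of fitting a two-factor-interaction (2FI) regression model and inspecting term significance, assuming a table of coded design runs; the column names, coded data and response below are placeholders, not the paper's CCD measurements.

```python
# Illustrative 2FI (linear + two-factor interaction) model for Ra,
# fitted on hypothetical coded design data with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 30
df = pd.DataFrame({
    "ET":   rng.choice([-1.0, 0.0, 1.0], n),     # electrode type, coded level
    "Ton":  rng.choice([-1.0, 0.0, 1.0], n),     # pulse on time, coded level
    "Toff": rng.choice([-1.0, 0.0, 1.0], n),     # pulse off time, coded level
    "Ip":   rng.choice([-1.0, 0.0, 1.0], n),     # pulse current, coded level
})
df["Ra"] = (2.0 + 0.6 * df.Ton + 0.4 * df.Ip + 0.2 * df.Ton * df.Ip
            + rng.normal(0, 0.1, n))             # hypothetical roughness response

# "(ET + Ton + Toff + Ip)**2" expands to main effects plus all two-factor interactions.
model = smf.ols("Ra ~ (ET + Ton + Toff + Ip)**2", data=df).fit()
print(model.summary())    # per-coefficient t-tests/p-values, R2 and adjusted R2
```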

  13. Semiparametric inference on the fractal index of Gaussian and conditionally Gaussian time series data

    DEFF Research Database (Denmark)

    Bennedsen, Mikkel

    Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index, α, of a time series. Our setup encompasses a large class of Gaussian...

  14. Comparison of autoregressive (AR) strategy with that of regression approach for determining ozone layer depletion as a physical process

    International Nuclear Information System (INIS)

    Yousufzai, M.A.K; Aansari, M.R.K.; Quamar, J.; Iqbal, J.; Hussain, M.A.

    2010-01-01

    This communication presents the development of a comprehensive characterization of the ozone layer depletion (OLD) phenomenon as a physical process, in the form of mathematical models comprising usual regression, multiple or polynomial regression, and a stochastic strategy. The relevance of these models is illustrated using predicted values of different parameters under a changing environment. The information obtained from such an analysis can be employed to adjust the relevant factors and variables to achieve optimum performance. This kind of analysis initiates a study towards formulating the phenomenon of OLD as a physical process with special reference to the stratospheric region of Pakistan. The data presented here establish that an autoregressive (AR) model of OLD as a physical process is more appropriate than usual regression. The data reported in the literature suggest quantitatively that OLD is occurring in our region. For this purpose we have modelled the phenomenon using data recorded at the Geophysical Centre Quetta during the period 1960-1999. The predictions made by this analysis are useful for public, private and other relevant organizations. (author)
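
    As an illustration of the autoregressive strategy contrasted with ordinary regression above, an AR model can be fitted to a univariate series such as annual ozone values and used for forecasting; the series below is synthetic and the model order is an arbitrary choice.

```python
# Illustrative AR fit to a synthetic annual ozone-like series,
# compared against a simple linear-trend regression.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(6)
years = np.arange(1960, 2000)
ozone = 300 - 0.4 * (years - 1960) + np.cumsum(rng.normal(0, 1.5, years.size))

trend = np.polyfit(years, ozone, 1)                  # usual regression: linear trend
ar_fit = AutoReg(ozone, lags=2, trend="ct").fit()    # AR(2) with constant and trend

print("linear trend slope:", trend[0])
print(ar_fit.params)                                           # AR coefficients and trend terms
print(ar_fit.predict(start=len(ozone), end=len(ozone) + 4))    # 5-step-ahead forecast
```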

  15. Vortices in Gaussian beams

    CSIR Research Space (South Africa)

    Roux, FS

    2009-01-01

    Full Text Available Presentation slides from the CSIR National Laser Centre on Gaussian beams carrying vortex dipoles. The recoverable content covers the Gaussian beam notation in normalised coordinates, g(u, v, t) = exp[-(u^2 + v^2)/(1 - it)] with u = x/ω0, v = y/ω0 and t = z/ρ, where ρ = πω0^2/λ, ω0 is the 1/e^2 beam waist radius and ρ the Rayleigh range; the replacement of integration by differentiation, in which the integral over the Gaussian beam is evaluated once and for all and transverse derivatives are applied thereafter; and the treatment of Gaussian beams with vortex dipoles.

  16. Gaussian operations and privacy

    International Nuclear Information System (INIS)

    Navascues, Miguel; Acin, Antonio

    2005-01-01

    We consider the possibilities offered by Gaussian states and operations for two honest parties, Alice and Bob, to obtain privacy against a third eavesdropping party, Eve. We first extend the security analysis of the protocol proposed in [Navascues et al. Phys. Rev. Lett. 94, 010502 (2005)]. Then, we prove that a generalized version of this protocol does not allow one to distill a secret key out of bound entangled Gaussian states

  17. Ultrasound-enhanced bioscouring of greige cotton: regression analysis of process factors

    Science.gov (United States)

    Ultrasound-enhanced bioscouring process factors for greige cotton fabric are examined using custom experimental design utilizing statistical principles. An equation is presented which predicts bioscouring performance based upon percent reflectance values obtained from UV-Vis measurements of rutheniu...

  18. Laser Raman detection of platelets for early and differential diagnosis of Alzheimer’s disease based on an adaptive Gaussian process classification algorithm

    International Nuclear Information System (INIS)

    Luo, Yusheng; Du, Z W; Yang, Y J; Chen, P; Wang, X H; Cheng, Y; Peng, J; Shen, A G; Hu, J M; Tian, Q; Shang, X L; Liu, Z C; Yao, X Q; Wang, J Z

    2013-01-01

    Early and differential diagnosis of Alzheimer’s disease (AD) has puzzled many clinicians. In this work, laser Raman spectroscopy (LRS) was developed to diagnose AD from platelet samples from AD transgenic mice and non-transgenic controls of different ages. An adaptive Gaussian process (GP) classification algorithm was used to re-establish the classification models of early AD, advanced AD and the control group with just two features and the capacity for noise reduction. Compared with the previous multilayer perceptron network method, the GP showed much better classification performance with the same feature set. Besides, spectra of platelets isolated from AD and Parkinson’s disease (PD) mice were also discriminated. Spectral data from 4 month AD (n = 39) and 12 month AD (n = 104) platelets, as well as control data (n = 135), were collected. Prospective application of the algorithm to the data set resulted in a sensitivity of 80%, a specificity of about 100% and a Matthews correlation coefficient of 0.81. Samples from PD (n = 120) platelets were also collected for differentiation from 12 month AD. The results suggest that platelet LRS detection analysis with the GP appears to be an easier and more accurate method than current ones for early and differential diagnosis of AD. (paper)

  19. Comparison of adaptive neuro-fuzzy inference system (ANFIS) and Gaussian processes for machine learning (GPML) algorithms for the prediction of skin temperature in lower limb prostheses.

    Science.gov (United States)

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-10-01

    Monitoring of the interface temperature at skin level in lower-limb prostheses is notoriously complicated, because the flexible nature of the interface liners impedes the consistent positioning of the temperature sensors required during donning and doffing. Predicting the in-socket residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. In this work, we propose an adaptive neuro-fuzzy inference strategy (ANFIS) to predict the in-socket residual limb temperature. ANFIS belongs to the family of fused neuro-fuzzy systems, in which the fuzzy system is incorporated into a framework that is adaptive in nature. The proposed method is compared to our earlier work using Gaussian processes for machine learning. Comparing the predicted and actual data indicates that both modeling techniques have comparable performance metrics and can be used efficiently for non-invasive temperature monitoring. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating different stability branches of the system.
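
    A minimal sketch of fitting an ARMA model to a single measured velocity channel, of the kind the analysis above starts from; the synthetic signal and the (p, q) order below are illustrative only, and in practice candidate orders would be compared via an information criterion.

```python
# Illustrative ARMA fit to a turbulence-like velocity signal.
# ARIMA(p, 0, q) in statsmodels is an ARMA(p, q) model.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
n = 2000
noise = rng.normal(size=n)
velocity = np.zeros(n)
for t in range(1, n):                            # synthetic signal with AR and MA parts
    velocity[t] = 0.7 * velocity[t - 1] + noise[t] + 0.3 * noise[t - 1]

fit = ARIMA(velocity, order=(1, 0, 1)).fit()     # ARMA(1, 1)
print(fit.params)                                # AR and MA coefficients, noise variance
print(fit.aic)                                   # compare across candidate (p, q) orders
```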

  1. Advanced signal processing based on support vector regression for lidar applications

    Science.gov (United States)

    Gelfusa, M.; Murari, A.; Malizia, A.; Lungaroni, M.; Peluso, E.; Parracino, S.; Talebzadeh, S.; Vega, J.; Gaudio, P.

    2015-10-01

    The LIDAR technique has recently found many applications in atmospheric physics and remote sensing. One of the main issues, in the deployment of systems based on LIDAR, is the filtering of the backscattered signal to alleviate the problems generated by noise. Improvement in the signal to noise ratio is typically achieved by averaging a quite large number (of the order of hundreds) of successive laser pulses. This approach can be effective but presents significant limitations. First of all, it implies a great stress on the laser source, particularly in the case of systems for automatic monitoring of large areas for long periods. Secondly, this solution can become difficult to implement in applications characterised by rapid variations of the atmosphere, for example in the case of pollutant emissions, or by abrupt changes in the noise. In this contribution, a new method for the software filtering and denoising of LIDAR signals is presented. The technique is based on support vector regression. The proposed new method is insensitive to the statistics of the noise and is therefore fully general and quite robust. The developed numerical tool has been systematically compared with the most powerful techniques available, using both synthetic and experimental data. Its performances have been tested for various statistical distributions of the noise and also for other disturbances of the acquired signal such as outliers. The competitive advantages of the proposed method are fully documented. The potential of the proposed approach to widen the capability of the LIDAR technique, particularly in the detection of widespread smoke, is discussed in detail.
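
    A minimal sketch of the support-vector-regression filtering idea applied to a single synthetic, noisy backscatter-like trace; the signal model, kernel and hyper-parameters are illustrative assumptions, not the values tuned in the paper.

```python
# Illustrative SVR-based denoising of one noisy LIDAR-like return.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(8)
r = np.linspace(1.0, 10.0, 500)                      # range (km)
clean = np.exp(-0.4 * r) / r**2                      # idealised backscatter decay
noisy = clean + rng.normal(0, 0.02, r.size)          # additive noise of any statistics

svr = SVR(kernel="rbf", C=10.0, epsilon=0.005, gamma=2.0)
svr.fit(r.reshape(-1, 1), noisy)                     # fit the single acquired trace
denoised = svr.predict(r.reshape(-1, 1))             # smooth estimate of the signal

print("RMS error, raw vs denoised:",
      np.sqrt(np.mean((noisy - clean) ** 2)),
      np.sqrt(np.mean((denoised - clean) ** 2)))
```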

  2. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    Science.gov (United States)

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  3. DETERMINATION AND OPTIMIZATION OF CYLINDRICAL GRINDING PROCESS PARAMETERS USING TAGUCHI METHOD AND REGRESSION ANALYSIS

    OpenAIRE

    M.Janardhan; Dr.A.Gopala Krishna

    2011-01-01

    Cylindrical grinding is one of the important metal cutting processes used extensively in finishing operations. Metal removal rate and surface finish are the important output responses in production with respect to quantity and quality, respectively. The experiments were conducted on a CNC cylindrical grinding machine with an L9 orthogonal array, with work speed, feed rate and depth of cut as the input machining variables. Empirical models are developed using design of experiments and response su...

  4. Nonclassicality by Local Gaussian Unitary Operations for Gaussian States

    Directory of Open Access Journals (Sweden)

    Yangyang Wang

    2018-04-01

    Full Text Available A measure of nonclassicality N in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. N is a faithful quantum correlation measure for Gaussian states, as product states have no such correlation and every non-product Gaussian state contains it. For any bipartite Gaussian state ρ_AB, we always have 0 ≤ N(ρ_AB) < 1, where the upper bound 1 is sharp. An explicit formula of N for (1+1)-mode Gaussian states and an estimate of N for (n+m)-mode Gaussian states are presented. A criterion of entanglement is established in terms of this correlation. The quantum correlation N is also compared with entanglement, Gaussian discord and Gaussian geometric discord.

  5. Phase statistics in non-Gaussian scattering

    International Nuclear Information System (INIS)

    Watson, Stephen M; Jakeman, Eric; Ridley, Kevin D

    2006-01-01

    Amplitude weighting can improve the accuracy of frequency measurements in signals corrupted by multiplicative speckle noise. When the speckle field constitutes a circular complex Gaussian process, the optimal function of amplitude weighting is provided by the field intensity, corresponding to the intensity-weighted phase derivative statistic. In this paper, we investigate the phase derivative and intensity-weighted phase derivative returned from a two-dimensional random walk, which constitutes a generic scattering model capable of producing both Gaussian and non-Gaussian fluctuations. Analytical results are developed for the correlation properties of the intensity-weighted phase derivative, as well as limiting probability densities of the scattered field. Numerical simulation is used to generate further probability densities and determine optimal weighting criteria from non-Gaussian fields. The results are relevant to frequency retrieval in radiation scattered from random media

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to come, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  7. Learning conditional Gaussian networks

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers conditional Gaussian networks. The parameters in the network are learned by using conjugate Bayesian analysis. As conjugate local priors, we apply the Dirichlet distribution for discrete variables and the Gaussian-inverse gamma distribution for continuous variables, given...... a configuration of the discrete parents. We assume parameter independence and complete data. Further, to learn the structure of the network, the network score is deduced. We then develop a local master prior procedure, for deriving parameter priors in these networks. This procedure satisfies parameter...... independence, parameter modularity and likelihood equivalence. Bayes factors to be used in model search are introduced. Finally the methods derived are illustrated by a simple example....

  8. Mixture regression models for the gap time distributions and illness-death processes.

    Science.gov (United States)

    Huang, Chia-Hui

    2018-01-27

    The aim of this study is to provide an analysis of gap event times under the illness-death model, where some subjects experience "illness" before "death" and others experience only "death." Which event is more likely to occur first and how the duration of the "illness" influences the "death" event are of interest. Because the occurrence of the second event is subject to dependent censoring, it can lead to bias in the estimation of model parameters. In this work, we generalize the semiparametric mixture models for competing risks data to accommodate the subsequent event and use a copula function to model the dependent structure between the successive events. Under the proposed method, the survival function of the censoring time does not need to be estimated when developing the inference procedure. We incorporate the cause-specific hazard functions with the counting process approach and derive a consistent estimation using the nonparametric maximum likelihood method. Simulations are conducted to demonstrate the performance of the proposed analysis, and its application in a clinical study on chronic myeloid leukemia is reported to illustrate its utility.

  9. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  10. AUTONOMOUS GAUSSIAN DECOMPOSITION

    International Nuclear Information System (INIS)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John

    2015-01-01

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes

  11. Quantum information with Gaussian states

    International Nuclear Information System (INIS)

    Wang Xiangbin; Hiroshima, Tohya; Tomita, Akihisa; Hayashi, Masahito

    2007-01-01

    Quantum optical Gaussian states are an important class of robust quantum states which can be manipulated with existing technologies. So far, most of the important quantum information experiments have been done with such states, including bright Gaussian light and weak Gaussian light. Extending the existing results of quantum information with discrete quantum states to the case of continuous-variable quantum states is an interesting theoretical task, and quantum Gaussian states play a central role in such a case. We review the properties and applications of Gaussian states in quantum information with emphasis on the fundamental concepts, the calculation techniques and the effects of imperfections of real-life experimental setups. Topics here include the elementary properties of Gaussian states and relevant quantum information devices, entanglement-based quantum tasks such as quantum teleportation, quantum cryptography with weak and strong Gaussian states and the quantum channel capacity, the mathematical theory of quantum entanglement, and state estimation for Gaussian states.

  12. Quantum steering of multimode Gaussian states by Gaussian measurements: monogamy relations and the Peres conjecture

    International Nuclear Information System (INIS)

    Ji, Se-Wan; Nha, Hyunchul; Kim, M S

    2015-01-01

    It is a topic of fundamental and practical importance how a quantum correlated state can be reliably distributed through a noisy channel for quantum information processing. The concept of quantum steering recently defined in a rigorous manner is relevant to study it under certain circumstances and here we address quantum steerability of Gaussian states to this aim. In particular, we attempt to reformulate the criterion for Gaussian steering in terms of local and global purities and show that it is sufficient and necessary for the case of steering a 1-mode system by an N-mode system. It subsequently enables us to reinforce a strong monogamy relation under which only one party can steer a local system of 1-mode. Moreover, we show that only a negative partial-transpose state can manifest quantum steerability by Gaussian measurements in relation to the Peres conjecture. We also discuss our formulation for the case of distributing a two-mode squeezed state via one-way quantum channels making dissipation and amplification effects, respectively. Finally, we extend our approach to include non-Gaussian measurements, more precisely, all orders of higher-order squeezing measurements, and find that this broad set of non-Gaussian measurements is not useful to demonstrate steering for Gaussian states beyond Gaussian measurements. (paper)

  13. Regressão e crescimento do primogênito no processo de tornar-se irmão Firstborn's regression and growth in the process of becoming a sibling

    Directory of Open Access Journals (Sweden)

    Débora Silva Oliveira

    2013-03-01

    Full Text Available Regression and growth indicators of the firstborn in the process of becoming a sibling were investigated. Three preschool-aged firstborns took part in the study during the third trimester of the mother's pregnancy and when the sibling was 12 and 24 months old. The Fables Test was applied and a qualitative content analysis was carried out. Results revealed regression indicators in the firstborn during the pregnancy, and growth indicators together with regression indicators at 12 and 24 months. Regression was a way for the firstborn to cope with the sibling's arrival, while growth revealed the capacity for achievements or the costs of being the older sibling. Both regression and growth enabled a healthy to-and-fro movement, which is fundamental for development towards independence. These findings have implications for both research and clinical practice.

  14. Gaussian discriminating strength

    Science.gov (United States)

    Rigovacca, L.; Farace, A.; De Pasquale, A.; Giovannetti, V.

    2015-10-01

    We present a quantifier of nonclassical correlations for bipartite, multimode Gaussian states. It is derived from the Discriminating Strength measure, introduced for finite-dimensional systems in Farace et al. [New J. Phys. 16, 073010 (2014), 10.1088/1367-2630/16/7/073010]. Like the latter, the new measure exploits the quantum Chernoff bound to gauge the susceptibility of the composite system to local perturbations induced by unitary gates extracted from a suitable set of allowed transformations (the latter being identified by posing some general requirements). Closed expressions are provided for the case of two-mode Gaussian states obtained by squeezing or by linearly mixing via a beam splitter a factorized two-mode thermal state. For these density matrices, we study how the nonclassical correlations are related to the entanglement present in the system and to its total photon number.

  15. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

  16. Determination of regression functions for the charging and discharging processes of valve regulated lead-acid batteries

    Directory of Open Access Journals (Sweden)

    Vukić Vladimir Đ.

    2012-01-01

    Full Text Available Following a deep discharge of AGM SVT 300 valve-regulated lead-acid batteries at the ten-hour discharge current, the batteries were charged using a variable current. Based on the obtained results, exponential and polynomial functions for approximating these processes were analysed. The main instrument for evaluating the quality of the implemented approximations was the adjusted coefficient of determination (adjusted R2). It was observed that the battery discharge process can be successfully approximated by both an exponential function and a second-order polynomial; in all analysed cases the adjusted coefficient of determination exceeded 0.995. The charging process of the deeply discharged batteries was successfully approximated with an exponential function, with values of the adjusted coefficient of determination of nearly 0.95. Despite the high values of the adjusted coefficient of determination, polynomial approximations of the second and third order did not provide satisfactory results for interpolating the battery charging characteristics. A possibility for the practical implementation of the obtained regression functions in uninterruptible power supply systems is described.
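
    A minimal sketch of the kind of exponential regression used above for a discharge voltage curve, with the adjusted coefficient of determination computed explicitly; the data, model form and constants are illustrative, not the measurements of the paper.

```python
# Illustrative exponential fit of a discharge-like voltage curve,
# plus the adjusted coefficient of determination used to judge the fit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
t = np.linspace(0, 10, 60)                                       # time (hours)
voltage = 12.8 - 1.5 * (1 - np.exp(-t / 4.0)) + rng.normal(0, 0.01, t.size)

def model(t, v0, a, tau):
    return v0 - a * (1 - np.exp(-t / tau))

params, _ = curve_fit(model, t, voltage, p0=[12.8, 1.0, 3.0])
pred = model(t, *params)

ss_res = np.sum((voltage - pred) ** 2)
ss_tot = np.sum((voltage - voltage.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
n_obs, k = t.size, 3                                             # samples, fitted parameters
r2_adj = 1 - (1 - r2) * (n_obs - 1) / (n_obs - k - 1)
print(f"R2 = {r2:.4f}, adjusted R2 = {r2_adj:.4f}")
```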

  17. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  18. Interconversion of pure Gaussian states requiring non-Gaussian operations

    Science.gov (United States)

    Jabbour, Michael G.; García-Patrón, Raúl; Cerf, Nicolas J.

    2015-01-01

    We analyze the conditions under which local operations and classical communication enable entanglement transformations between bipartite pure Gaussian states. A set of necessary and sufficient conditions had been found [G. Giedke et al., Quant. Inf. Comput. 3, 211 (2003)] for the interconversion between such states when restricted to Gaussian local operations and classical communication. Here, we exploit majorization theory in order to derive more general (sufficient) conditions for the interconversion between bipartite pure Gaussian states that go beyond Gaussian local operations. While our technique is applicable to an arbitrary number of modes for each party, it allows us to exhibit surprisingly simple examples of 2×2 Gaussian states that necessarily require non-Gaussian local operations to be transformed into each other.

  19. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek; Rue, Haavard

    2017-01-01

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  20. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek

    2017-07-07

    Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  1. Sums and Gaussian vectors

    CERN Document Server

    Yurinsky, Vadim Vladimirovich

    1995-01-01

    Surveys the methods currently applied to study sums of infinite-dimensional independent random vectors in situations where their distributions resemble Gaussian laws. Covers probabilities of large deviations, Chebyshev-type inequalities for seminorms of sums, a method of constructing Edgeworth-type expansions, estimates of characteristic functions for random vectors obtained by smooth mappings of infinite-dimensional sums to Euclidean spaces. A self-contained exposition of the modern research apparatus around CLT, the book is accessible to new graduate students, and can be a useful reference for researchers and teachers of the subject.

  2. Integrating principal component analysis and vector quantization with support vector regression for sulfur content prediction in HDS process

    Directory of Open Access Journals (Sweden)

    Shokri Saeid

    2015-01-01

    Full Text Available An accurate prediction of sulfur content is very important for proper operation and product quality control in the hydrodesulfurization (HDS) process. For this purpose, a reliable data-driven soft sensor utilizing Support Vector Regression (SVR) was developed, and the effects of integrating Vector Quantization (VQ) or Principal Component Analysis (PCA) on the performance of this soft sensor were studied. First, in the pre-processing step, the PCA and VQ techniques were used to reduce the dimensions of the original input datasets. Then, the compressed datasets were used as input variables for the SVR model. Experimental data from the HDS setup were employed to validate the proposed integrated model. The integration of the VQ/PCA techniques with the SVR model increased the prediction accuracy of SVR. The obtained results show that the integrated VQ-SVR technique was more accurate than PCA-SVR, and VQ also decreased the combined training and test time of the SVR model in comparison with PCA. For further evaluation, the performance of the VQ-SVR model was also compared with that of plain SVR. The results indicated that the VQ-SVR model delivered the most satisfactory predictive performance (AARE = 0.0668 and R2 = 0.995) among the investigated models.
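
    A minimal sketch of the PCA-compressed SVR soft sensor discussed above (the VQ variant would replace the PCA step with a learned codebook, e.g. from k-means); the data, dimensions and hyper-parameters are placeholders.

```python
# Illustrative PCA + SVR soft sensor: compress the process inputs, then regress sulfur content.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 20))                    # 20 process variables (T, P, flows, ...)
sulfur = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.05 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, sulfur, test_size=0.3, random_state=0)

soft_sensor = make_pipeline(StandardScaler(), PCA(n_components=5),
                            SVR(C=10.0, epsilon=0.01))
soft_sensor.fit(X_tr, y_tr)
print("test R2:", soft_sensor.score(X_te, y_te))
```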

  3. Open problems in Gaussian fluid queueing theory

    NARCIS (Netherlands)

    Dȩbicki, K.; Mandjes, M.

    2011-01-01

    We present three challenging open problems that originate from the analysis of the asymptotic behavior of Gaussian fluid queueing models. In particular, we address the problem of characterizing the correlation structure of the stationary buffer content process, the speed of convergence to

  4. Oracle Wiener filtering of a Gaussian signal

    NARCIS (Netherlands)

    Babenko, A.; Belitser, E.

    2011-01-01

    We study the problem of filtering a Gaussian process whose trajectories, in some sense, have an unknown smoothness β0 from the white noise of small intensity e. If we knew the parameter β0, we would use the Wiener filter which has the meaning of oracle. Our goal is now to mimic the oracle, i.e.,

  5. Oracle Wiener filtering of a Gaussian signal

    NARCIS (Netherlands)

    Babenko, A.; Belitser, E.N.

    2011-01-01

    We study the problem of filtering a Gaussian process whose trajectories, in some sense, have an unknown smoothness β0 from the white noise of small intensity e. If we knew the parameter β0, we would use the Wiener filter which has the meaning of oracle. Our goal is now to mimic the oracle, i.e.,

  6. Framework for developing hybrid process-driven, artificial neural network and regression models for salinity prediction in river systems

    Science.gov (United States)

    Hunter, Jason M.; Maier, Holger R.; Gibbs, Matthew S.; Foale, Eloise R.; Grosvenor, Naomi A.; Harders, Nathan P.; Kikuchi-Miller, Tahali C.

    2018-05-01

    Salinity modelling in river systems is complicated by a number of processes, including in-stream salt transport and various mechanisms of saline accession that vary dynamically as a function of water level and flow, often at different temporal scales. Traditionally, salinity models in rivers have either been process- or data-driven. The primary problem with process-based models is that in many instances, not all of the underlying processes are fully understood or able to be represented mathematically. There are also often insufficient historical data to support model development. The major limitation of data-driven models, such as artificial neural networks (ANNs) in comparison, is that they provide limited system understanding and are generally not able to be used to inform management decisions targeting specific processes, as different processes are generally modelled implicitly. In order to overcome these limitations, a generic framework for developing hybrid process and data-driven models of salinity in river systems is introduced and applied in this paper. As part of the approach, the most suitable sub-models are developed for each sub-process affecting salinity at the location of interest based on consideration of model purpose, the degree of process understanding and data availability, which are then combined to form the hybrid model. The approach is applied to a 46 km reach of the Murray River in South Australia, which is affected by high levels of salinity. In this reach, the major processes affecting salinity include in-stream salt transport, accession of saline groundwater along the length of the reach and the flushing of three waterbodies in the floodplain during overbank flows of various magnitudes. Based on trade-offs between the degree of process understanding and data availability, a process-driven model is developed for in-stream salt transport, an ANN model is used to model saline groundwater accession and three linear regression models are used

  7. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    Science.gov (United States)

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without

  8. Rotating quantum Gaussian packets

    International Nuclear Information System (INIS)

    Dodonov, V V

    2015-01-01

    We study two-dimensional quantum Gaussian packets with a fixed value of mean angular momentum. This value is the sum of two independent parts: the ‘external’ momentum related to the motion of the packet center and the ‘internal’ momentum due to quantum fluctuations. The packets minimizing the mean energy of an isotropic oscillator with the fixed mean angular momentum are found. They exist for ‘co-rotating’ external and internal motions, and they have nonzero correlation coefficients between coordinates and momenta, together with some (moderate) amount of quadrature squeezing. Variances of angular momentum and energy are calculated, too. Differences in the behavior of ‘co-rotating’ and ‘anti-rotating’ packets are shown. The time evolution of rotating Gaussian packets is analyzed, including the cases of a charge in a homogeneous magnetic field and a free particle. In the latter case, the effect of initial shrinking of packets with big enough coordinate-momentum correlation coefficients (followed by the well known expansion) is discovered. This happens due to a competition of ‘focusing’ and ‘de-focusing’ in the orthogonal directions. (paper)

  9. Bayesian ARTMAP for regression.

    Science.gov (United States)

    Sasu, L M; Andonie, R

    2013-10-01

    Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA has been used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online-trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    Science.gov (United States)

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein using the Fourier transform near infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used as an analysis tool to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. First, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra were preprocessed with the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. The experimental results showed that the SRDA-NN model outperformed two other NN models, which were developed from the feature information of principal component analysis (PCA) and linear discriminant analysis (LDA), and the correct recognition rate of the SRDA-NN model reached 94.28% in the validation set. In this work, in order to further improve the recognition accuracy of the final model, an Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. The experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm, and the correct

  11. Accounting for Regressive Eye-Movements in Models of Sentence Processing: A Reappraisal of the Selective Reanalysis Hypothesis

    Science.gov (United States)

    Mitchell, Don C.; Shen, Xingjia; Green, Matthew J.; Hodgson, Timothy L.

    2008-01-01

    When people read temporarily ambiguous sentences, there is often an increased prevalence of regressive eye-movements launched from the word that resolves the ambiguity. Traditionally, such regressions have been interpreted at least in part as reflecting readers' efforts to re-read and reconfigure earlier material, as exemplified by the Selective…

  12. Impact of the Processes of Total Testicular Regression and Recrudescence on the Epididymal Physiology of the Bat Myotis nigricans (Chiroptera: Vespertilionidae).

    Directory of Open Access Journals (Sweden)

    Mateus R Beguelini

    Full Text Available Myotis nigricans is a species of vespertilionid bat whose males show two periods of total testicular regression within the same annual reproductive cycle in the northwest of São Paulo State, Brazil. Studies have demonstrated that its epididymis has an elongation of the caudal portion, which stores spermatozoa during the period of testicular regression in July, but contains no sperm during the regression in November. Thus, the aim of this study was to analyze the impact of total testicular regression on the epididymal morphophysiology and on the patterns of its hormonal regulation. The results demonstrate a continuous activity of the epididymis from the Active to the Regressing periods; a morphofunctional regression of the epididymis in the Regressed period; and a slow recrudescence process. Thus, we concluded that the processes of total testicular regression and posterior recrudescence undergone by M. nigricans also impact the physiology of the epididymis, but with a delay in the epididymal response. Epididymal physiology is regulated by testosterone and estrogen, through the production and secretion of testosterone by the testes, its conduction to the epididymis (mainly through luminal fluid), conversion of testosterone to dihydrotestosterone by the 5α-reductase enzyme (mainly in epithelial cells) and to estrogen by aromatase; and through the activation/deactivation of the androgen receptor and estrogen receptor α in epithelial cells, which regulate the epithelial cell morphophysiology, prevent cell death and regulate their protein expression and secretion, ensuring the maturation and storage of the spermatozoa.

  13. Holographic non-Gaussianity

    International Nuclear Information System (INIS)

    McFadden, Paul; Skenderis, Kostas

    2011-01-01

    We investigate the non-Gaussianity of primordial cosmological perturbations within our recently proposed holographic description of inflationary universes. We derive a holographic formula that determines the bispectrum of cosmological curvature perturbations in terms of correlation functions of a holographically dual three-dimensional non-gravitational quantum field theory (QFT). This allows us to compute the primordial bispectrum for a universe which started in a non-geometric holographic phase, using perturbative QFT calculations. Strikingly, for a class of models specified by a three-dimensional super-renormalisable QFT, the primordial bispectrum is of exactly the factorisable equilateral form with f NL equil. = 5/36, irrespective of the details of the dual QFT. A by-product of this investigation is a holographic formula for the three-point function of the trace of the stress-energy tensor along general holographic RG flows, which should have applications outside the remit of this work

  14. Geometry of Gaussian quantum states

    International Nuclear Information System (INIS)

    Link, Valentin; Strunz, Walter T

    2015-01-01

    We study the Hilbert–Schmidt measure on the manifold of mixed Gaussian states in multi-mode continuous variable quantum systems. An analytical expression for the Hilbert–Schmidt volume element is derived. Its corresponding probability measure can be used to study typical properties of Gaussian states. It turns out that although the manifold of Gaussian states is unbounded, an ensemble of Gaussian states distributed according to this measure still has a normalizable distribution of symplectic eigenvalues, from which unitarily invariant properties can be obtained. By contrast, we find that for an ensemble of one-mode Gaussian states based on the Bures measure the corresponding distribution cannot be normalized. As important applications, we determine the distribution and the mean value of von Neumann entropy and purity for the Hilbert–Schmidt measure. (paper)

  15. Autistic Regression

    Science.gov (United States)

    Matson, Johnny L.; Kozlowski, Alison M.

    2010-01-01

    Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

  16. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  17. Colored non-gaussian noise driven open systems: generalization of Kramers' theory with a unified approach.

    Science.gov (United States)

    Baura, Alendu; Sen, Monoj Kumar; Goswami, Gurupada; Bag, Bidhan Chandra

    2011-01-28

    In this paper we have calculated the escape rate from a metastable state in the presence of both colored internal thermal and external nonthermal noises. For the internal noise we have considered the usual Gaussian distribution, but the external noise may be Gaussian or non-Gaussian in character. The calculated rate is valid for low strength of the non-Gaussian noise, such that an effective Gaussian approximation of the non-Gaussian noise applies, in which the even cumulants of order four and higher are neglected. The rate expression derived here reduces to the known results of the literature, as well as to the case of a purely external-noise-driven activated rate process. The latter exhibits how the rate changes if one switches from a non-Gaussian to a Gaussian character of the external noise.

  18. Handbook of Gaussian basis sets

    International Nuclear Information System (INIS)

    Poirier, R.; Kari, R.; Csizmadia, I.G.

    1985-01-01

    A large body of information useful for chemists involved in molecular Gaussian computations is presented. Every effort has been made by the authors to collect all available data for Cartesian Gaussians as found in the literature up to July of 1984. The data in this text include a large collection of polarization function exponents, although in this case the collection is not complete. Exponents for Slater type orbitals (STO) were included for completeness. This text offers a collection of Gaussian exponents largely without criticism. (Auth.)

  19. Image reconstruction under non-Gaussian noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica

    During acquisition and transmission, images are often blurred and corrupted by noise. One of the fundamental tasks of image processing is to reconstruct the clean image from a degraded version. The process of recovering the original image from the data is an example of inverse problem. Due...... to the ill-posedness of the problem, the simple inversion of the degradation model does not give any good reconstructions. Therefore, to deal with the ill-posedness it is necessary to use some prior information on the solution or the model and the Bayesian approach. Additive Gaussian noise has been......D thesis intends to solve some of the many open questions for image restoration under non-Gaussian noise. The two main kinds of noise studied in this PhD project are the impulse noise and the Cauchy noise. Impulse noise is due to for instance the malfunctioning pixel elements in the camera sensors, errors...

  20. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
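
    For readers new to importance estimation, the target quantity is the density ratio w(x) = p_te(x)/p_tr(x) between a test and a training density. The sketch below is not the GM-KLIEP procedure itself (which fits the ratio directly by optimizing a Kullback-Leibler based objective); it is a naive plug-in baseline, assumed here only to make the target concrete, that fits a Gaussian mixture to each sample set and divides the two density estimates.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
x_train = rng.normal(loc=0.0, scale=1.0, size=(500, 1))  # samples from p_tr
x_test = rng.normal(loc=0.5, scale=0.8, size=(500, 1))   # samples from p_te

# Plug-in density estimates via two independent Gaussian mixtures
gmm_tr = GaussianMixture(n_components=2, random_state=0).fit(x_train)
gmm_te = GaussianMixture(n_components=2, random_state=0).fit(x_test)

# Importance w(x) = p_te(x) / p_tr(x) evaluated on a grid
x_eval = np.linspace(-3, 3, 7).reshape(-1, 1)
importance = np.exp(gmm_te.score_samples(x_eval) - gmm_tr.score_samples(x_eval))
print(np.round(importance, 3))
```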

  1. Extended Linear Models with Gaussian Priors

    DEFF Research Database (Denmark)

    Quinonero, Joaquin

    2002-01-01

    In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model...... a very big flexibility. Support Vector Machines (SVM's) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors...... on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers....

  2. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

    Science.gov (United States)

    Laszlo, Sarah; Federmeier, Kara D.

    2010-01-01

    Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

  3. Remote-sensing data processing with the multivariate regression analysis method for iron mineral resource potential mapping: a case study in the Sarvian area, central Iran

    Science.gov (United States)

    Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran

    2018-03-01

    This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, for mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as dependent variables for MPM. According to the probability value (p value), the coefficient of determination (R2) and the adjusted coefficient of determination (Radj2), the second regression model (which consisted of multiple UIVs) fitted better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observations. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).

  4. Information geometry of Gaussian channels

    International Nuclear Information System (INIS)

    Monras, Alex; Illuminati, Fabrizio

    2010-01-01

    We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).

  5. Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data

    Science.gov (United States)

    Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.

    2017-12-01

    The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with a Gaussian) based microwave temperature and mixing ratio retrieval system as well as the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data are from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in the distribution, we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as by educators. However, because the new retrieved values come from a non-Gaussian-based 1DVAR, they display non-Gaussian behaviors that need to pass a quality control measure consistent with this distribution; these new measures are presented here along with initial results for checking the retrievals.

  6. Gaussian entanglement distribution via satellite

    Science.gov (United States)

    Hosseinidehaj, Nedasadat; Malaney, Robert

    2015-02-01

    In this work we analyze three quantum communication schemes for the generation of Gaussian entanglement between two ground stations. Communication occurs via a satellite over two independent atmospheric fading channels dominated by turbulence-induced beam wander. In our first scheme, the engineering complexity remains largely on the ground transceivers, with the satellite acting simply as a reflector. Although the channel state information of the two atmospheric channels remains unknown in this scheme, the Gaussian entanglement generation between the ground stations can still be determined. On the ground, distillation and Gaussification procedures can be applied, leading to a refined Gaussian entanglement generation rate between the ground stations. We compare the rates produced by this first scheme with two competing schemes in which quantum complexity is added to the satellite, thereby illustrating the tradeoff between space-based engineering complexity and the rate of ground-station entanglement generation.

  7. Revisiting non-Gaussianity from non-attractor inflation models

    Science.gov (United States)

    Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei

    2018-05-01

    Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition—such as in the case of smooth transition or some sharp transition scenarios—the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.

  8. Tachyon mediated non-Gaussianity

    International Nuclear Information System (INIS)

    Dutta, Bhaskar; Leblond, Louis; Kumar, Jason

    2008-01-01

    We describe a general scenario where primordial non-Gaussian curvature perturbations are generated in models with extra scalar fields. The extra scalars communicate to the inflaton sector mainly through the tachyonic (waterfall) field condensing at the end of hybrid inflation. These models can yield significant non-Gaussianity of the local shape, and both signs of the bispectrum can be obtained. These models have cosmic strings and a nearly flat power spectrum, which together have been recently shown to be a good fit to WMAP data. We illustrate with a model of inflation inspired from intersecting brane models.

  9. Geographically weighted regression model on poverty indicator

    Science.gov (United States)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider a Gaussian kernel as the weighting function. The GWR uses the diagonal matrix resulting from evaluating the Gaussian kernel function as the weighting in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data in Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city in Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data in Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. In this research, we found the coefficient of determination R2 to be 68.64%. The districts fall into two categories, each influenced by a different set of significant factors.
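
    Geographically weighted regression amounts to a separate weighted least-squares fit at each target location, with weights w_i = exp(-d_i^2 / (2 b^2)) from a Gaussian kernel of the distances d_i to that location and bandwidth b. The sketch below shows that mechanic on synthetic coordinates; the bandwidth, covariate and variable names are illustrative assumptions, not the study's settings.

```python
import numpy as np

def gwr_coefficients(coords, X, y, target, bandwidth):
    """Fit a weighted least-squares model at one target location.
    The weights come from a Gaussian kernel of the distances to the target."""
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    XtW = X.T * w                              # X^T W with diagonal weight matrix W
    return np.linalg.solve(XtW @ X, XtW @ y)   # solves (X^T W X) beta = X^T W y

# Synthetic example: the local slope varies smoothly with longitude
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))            # (lon, lat) of 200 districts
x1 = rng.normal(size=200)
X = np.column_stack([np.ones(200), x1])               # intercept + one covariate
y = (1.0 + 0.3 * coords[:, 0]) * x1 + rng.normal(scale=0.1, size=200)

for target in [np.array([1.0, 5.0]), np.array([9.0, 5.0])]:
    beta = gwr_coefficients(coords, X, y, target, bandwidth=1.5)
    print(target, np.round(beta, 2))   # the local slope should rise from west to east
```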

  10. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Progressive and Regressive Developmental Changes in Neural Substrates for Face Processing: Testing Specific Predictions of the Interactive Specialization Account

    Science.gov (United States)

    Joseph, Jane E.; Gathers, Ann D.; Bhatt, Ramesh S.

    2011-01-01

    Face processing undergoes a fairly protracted developmental time course but the neural underpinnings are not well understood. Prior fMRI studies have only examined progressive changes (i.e. increases in specialization in certain regions with age), which would be predicted by both the Interactive Specialization (IS) and maturational theories of…

  12. The Multivariate Gaussian Probability Distribution

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2005-01-01

    This technical report intends to gather information about the multivariate gaussian distribution, that was previously not (at least to my knowledge) to be found in one place and written as a reference manual. Additionally, some useful tips and tricks are collected that may be useful in practical ...

  13. On Gaussian conditional independence structures

    Czech Academy of Sciences Publication Activity Database

    Lněnička, Radim; Matúš, František

    2007-01-01

    Roč. 43, č. 3 (2007), s. 327-342 ISSN 0023-5954 R&D Projects: GA AV ČR IAA100750603 Institutional research plan: CEZ:AV0Z10750506 Keywords : multivariate Gaussian distribution * positive definite matrices * determinants * gaussoids * covariance selection models * Markov perfectness Subject RIV: BA - General Mathematics Impact factor: 0.552, year: 2007

  14. Strategy and model building in the fourth dimension: a null model for genotype x age interaction as a Gaussian stationary stochastic process.

    Science.gov (United States)

    Diego, Vincent P; Almasy, Laura; Dyer, Thomas D; Soler, Júlia M P; Blangero, John

    2003-12-31

    Using univariate and multivariate variance components linkage analysis methods, we studied possible genotype x age interaction in cardiovascular phenotypes related to the aging process from the Framingham Heart Study. We found evidence for genotype x age interaction for fasting glucose and systolic blood pressure. There is polygenic genotype x age interaction for fasting glucose and systolic blood pressure and quantitative trait locus x age interaction for a linkage signal for systolic blood pressure phenotypes located on chromosome 17 at 67 cM.

  15. Current inversion induced by colored non-Gaussian noise

    International Nuclear Information System (INIS)

    Bag, Bidhan Chandra; Hu, Chin-Kung

    2009-01-01

    We study a stochastic process driven by colored non-Gaussian noises. For the flashing ratchet model we find that there is a current inversion in the variation of the current with the half-cycle period which accounts for the potential on–off operation. The current inversion almost disappears if one switches from non-Gaussian (NG) to Gaussian (G) noise. We also find that at low value of the asymmetry parameter of the potential the mobility controlled current is more negative for NG noise as compared to G noise. But at large magnitude of the parameter the diffusion controlled positive current is higher for the former than for the latter. On increasing the noise correlation time (τ), keeping the noise strength fixed, the mean velocity of a particle first increases and then decreases after passing through a maximum if the noise is non-Gaussian. For Gaussian noise, the current monotonically decreases. The current increases with the noise parameter p, 0< p<5/3, which is 1 for Gaussian noise

  16. Passivity and practical work extraction using Gaussian operations

    International Nuclear Information System (INIS)

    Brown, Eric G; Huber, Marcus; Friis, Nicolai

    2016-01-01

    Quantum states that can yield work in a cyclical Hamiltonian process form one of the primary resources in the context of quantum thermodynamics. Conversely, states whose average energy cannot be lowered by unitary transformations are called passive. However, while work may be extracted from non-passive states using arbitrary unitaries, the latter may be hard to realize in practice. It is therefore pertinent to consider the passivity of states under restricted classes of operations that can be feasibly implemented. Here, we ask how restrictive the class of Gaussian unitaries is for the task of work extraction. We investigate the notion of Gaussian passivity, that is, we present necessary and sufficient criteria identifying all states whose energy cannot be lowered by Gaussian unitaries. For all other states we give a prescription for the Gaussian operations that extract the maximal amount of energy. Finally, we show that the gap between passivity and Gaussian passivity is maximal, i.e., Gaussian-passive states may still have a maximal amount of energy that is extractable by arbitrary unitaries, even under entropy constraints. (paper)

  17. Building information for systematic improvement of the prevention of hospital-acquired pressure ulcers with statistical process control charts and regression.

    Science.gov (United States)

    Padula, William V; Mishra, Manish K; Weaver, Christopher D; Yilmaz, Taygan; Splaine, Mark E

    2012-06-01

    To demonstrate complementary results of regression and statistical process control (SPC) chart analyses for hospital-acquired pressure ulcers (HAPUs), and identify possible links between changes and opportunities for improvement between hospital microsystems and macrosystems. Ordinary least squares and panel data regression of retrospective hospital billing data, and SPC charts of prospective patient records for a US tertiary-care facility (2004-2007). A prospective cohort of hospital inpatients at risk for HAPUs was the study population. There were 337 HAPU incidences hospital-wide among 43 844 inpatients. A probit regression model estimated the effect of age, gender and length of stay on HAPU incidence (pseudo R2 = 0.096). Panel data analysis determined that for each additional day in the hospital, there was a 0.28% increase in the likelihood of HAPU incidence. A p-chart of HAPU incidence showed a mean incidence rate of 1.17% remaining in statistical control. A t-chart showed the average time between events for the last 25 HAPUs was 13.25 days. There was one 57-day period between two incidences during the observation period. A p-chart addressing Braden scale assessments showed that 40.5% of all patients were risk stratified for HAPUs upon admission. SPC charts complement standard regression analysis. SPC amplifies patient outcomes at the microsystem level and is useful for guiding quality improvement. Macrosystems should monitor effective quality improvement initiatives in microsystems and aid the spread of successful initiatives to other microsystems, followed by system-wide analysis with regression. Although HAPU incidence in this study is below the national mean, there is still room to improve HAPU incidence in this hospital setting since 0% incidence is theoretically achievable. Further assessment of pressure ulcer incidence could illustrate improvement in the quality of care and prevent HAPUs.
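
    The p-chart described above plots each period's HAPU proportion against control limits p_bar +/- 3*sqrt(p_bar*(1 - p_bar)/n_i), where p_bar is the pooled incidence rate and n_i the number of at-risk patients in period i. The sketch below computes those limits for made-up monthly counts; the numbers are illustrative and are not the study's data.

```python
import numpy as np

# Hypothetical monthly counts: at-risk inpatients and HAPU events per month
n = np.array([900, 950, 870, 910, 980, 940, 890, 920, 960, 930, 900, 950])
events = np.array([11, 9, 12, 10, 8, 13, 9, 11, 10, 12, 9, 10])

p_bar = events.sum() / n.sum()                 # pooled incidence rate
p = events / n                                 # monthly proportions
sigma = np.sqrt(p_bar * (1.0 - p_bar) / n)     # binomial standard error per month
ucl = p_bar + 3.0 * sigma                      # upper control limit
lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)  # lower control limit, floored at 0

out_of_control = (p > ucl) | (p < lcl)
print(f"p-bar = {p_bar:.4f}")
print("months out of control:", np.flatnonzero(out_of_control))
```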

  18. Laguerre Gaussian beam multiplexing through turbulence

    CSIR Research Space (South Africa)

    Trichili, A

    2014-08-17

    Full Text Available We analyze the effect of atmospheric turbulence on the propagation of multiplexed Laguerre Gaussian modes. We present a method to multiplex Laguerre Gaussian modes using digital holograms and decompose the resulting field after encountering a...

  19. Analytic matrix elements with shifted correlated Gaussians

    DEFF Research Database (Denmark)

    Fedorov, D. V.

    2017-01-01

    Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.

  20. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter eSpöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.

  1. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  2. Gaussian statistics for palaeomagnetic vectors

    Science.gov (United States)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to

  3. Stable Lévy motion with inverse Gaussian subordinator

    Science.gov (United States)

    Kumar, A.; Wyłomańska, A.; Gajda, J.

    2017-09-01

    In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with inverse Gaussian process. The NIG process found many interesting applications, especially in financial data description. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional order moments and asymptotic tail behavior. We show the connection of the process with continuous time random walk. Further, the governing fractional partial differential equations for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of sample mean square displacement, the main tool in detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by inverse Gaussian subordinator we propose a step-by-step procedure of parameters estimation. At the end, we show how the examined process can be useful to model financial time series.
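
    The construction described here, a parent Lévy process evaluated at the random times of an inverse Gaussian subordinator, is straightforward to simulate. The sketch below subordinates a Brownian motion with drift by inverse Gaussian increments, which gives the NIG special case mentioned in the abstract; swapping the Gaussian parent increments for stable ones would give the process studied in the paper. The parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_steps, dt = 1000, 0.01
mu_sub, lam_sub = dt, 50.0 * dt        # inverse Gaussian increment parameters
drift, vol = 0.1, 0.3                  # parent Brownian motion with drift

# Inverse Gaussian subordinator: a nondecreasing random "operational time"
dT = rng.wald(mu_sub, lam_sub, size=n_steps)

# Parent process evaluated over the random time increments dT
dX = drift * dT + vol * np.sqrt(dT) * rng.standard_normal(n_steps)
X = np.cumsum(dX)                      # a normal inverse Gaussian (NIG) type path

print(X[-1], dT.sum())                 # terminal value and total operational time
```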

  4. Inflation in random Gaussian landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu [Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States)

    2017-05-01

    We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.

  5. Multi-Response Optimization and Regression Analysis of Process Parameters for Wire-EDMed HCHCr Steel Using Taguchi’s Technique

    Directory of Open Access Journals (Sweden)

    K. Srujay Varma

    2017-04-01

    Full Text Available In this study, the effect of the machining process parameters, viz. pulse-on time, pulse-off time, current and servo-voltage, on the wire EDM machining of High Carbon High Chromium (HCHCr) steel using a copper electrode was investigated. High Carbon High Chromium steel is a difficult-to-machine alloy with many applications in low-temperature manufacturing, and copper was chosen as the electrode because of its good electrical conductivity and because it is the most frequently used electrode material worldwide; the copper tool-making tradition has led many shops in Europe and Japan to adopt copper electrodes. Experiments were conducted according to Taguchi's technique by varying the machining process parameters at three levels. Taguchi's method based on an L9 orthogonal array was followed, limiting the number of experiments to nine and thereby reducing experimental cost and time. The targeted output parameters are material removal rate (MRR), Vickers hardness (HV) and surface roughness (SR). Analysis of variance (ANOVA) and regression analysis were performed using Minitab 17 software to optimize the parameters and to establish relationships between the input and output process parameters. Regression models were developed relating the input and output parameters. It was observed that the most influential factors for MRR, hardness and SR are Ton, Toff and SV, respectively.

  6. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  7. Gaussian Embeddings for Collaborative Filtering

    OpenAIRE

    Dos Santos , Ludovic; Piwowarski , Benjamin; Gallinari , Patrick

    2017-01-01

    International audience; Most collaborative filtering systems, such as matrix factorization, use vector representations for items and users. Those representations are deterministic and do not allow modeling the uncertainty of the learned representation, which can be useful when a user has a small number of rated items (cold start), or when there is conflicting information about the behavior of a user or the ratings of an item. In this paper, we leverage recent works in learning Gaussian embeddi...

  8. Scaled unscented transform Gaussian sum filter: Theory and application

    KAUST Repository

    Luo, Xiaodong

    2010-05-01

    In this work we consider the state estimation problem in nonlinear/non-Gaussian systems. We introduce a framework, called the scaled unscented transform Gaussian sum filter (SUT-GSF), which combines two ideas: the scaled unscented Kalman filter (SUKF) based on the concept of scaled unscented transform (SUT) (Julier and Uhlmann (2004) [16]), and the Gaussian mixture model (GMM). The SUT is used to approximate the mean and covariance of a Gaussian random variable which is transformed by a nonlinear function, while the GMM is adopted to approximate the probability density function (pdf) of a random variable through a set of Gaussian distributions. With these two tools, a framework can be set up to assimilate nonlinear systems in a recursive way. Within this framework, one can treat a nonlinear stochastic system as a mixture model of a set of sub-systems, each of which takes the form of a nonlinear system driven by a known Gaussian random process. Then, for each sub-system, one applies the SUKF to estimate the mean and covariance of the underlying Gaussian random variable transformed by the nonlinear governing equations of the sub-system. Incorporating the estimations of the sub-systems into the GMM gives an explicit (approximate) form of the pdf, which can be regarded as a "complete" solution to the state estimation problem, as all of the statistical information of interest can be obtained from the explicit form of the pdf (Arulampalam et al. (2002) [7]). In applications, a potential problem of a Gaussian sum filter is that the number of Gaussian distributions may increase very rapidly. To this end, we also propose an auxiliary algorithm to conduct pdf re-approximation so that the number of Gaussian distributions can be reduced. With the auxiliary algorithm, in principle the SUT-GSF can achieve almost the same computational speed as the SUKF if the SUT-GSF is implemented in parallel. As an example, we will use the SUT-GSF to assimilate a 40-dimensional system due to
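
    The scaled unscented transform at the core of the SUKF and SUT-GSF propagates a small set of deterministically chosen sigma points through the nonlinear function and recombines them with fixed weights to approximate the transformed mean and covariance. The sketch below implements the standard scaled sigma-point construction for a toy two-dimensional Gaussian and a nonlinear map; the parameter values (alpha, beta, kappa) and the toy function are illustrative choices, not part of the SUT-GSF framework itself.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.5, beta=2.0, kappa=0.0):
    """Scaled unscented transform: propagate sigma points through f and
    recombine to approximate the mean and covariance of f(x), x ~ N(mean, cov)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)

    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)

    y = np.array([f(p) for p in sigma])                 # propagated sigma points
    mean_y = wm @ y
    dy = y - mean_y
    cov_y = (wc[:, None] * dy).T @ dy
    return mean_y, cov_y

f = lambda x: np.array([x[0] ** 2, np.sin(x[1])])       # toy nonlinear map
m, C = unscented_transform(np.array([1.0, 0.5]), np.diag([0.1, 0.2]), f)
print(m, "\n", C)
```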

  9. Adaptive Filtering for Non-Gaussian Processes

    DEFF Research Database (Denmark)

    Kidmose, Preben

    2000-01-01

    A new stochastic gradient robust filtering method, based on a non-linear amplitude transformation, is proposed. The method requires no a priori knowledge of the characteristics of the input signals and it is insensitive to the signals distribution and to the stationarity of the signals. A simulat...

  10. Statistics of peaks of Gaussian random fields

    International Nuclear Information System (INIS)

    Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S.; Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima. 67 references
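
    A quick way to build intuition for these peak statistics is to generate a smoothed Gaussian random field numerically and count its local maxima above a set of height thresholds: the peak density falls off sharply with threshold height, as the analytic formulas predict. The sketch below does this for a two-dimensional field; the grid size and smoothing scale are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(2)

# Gaussian random field: smoothed white noise, normalized to unit variance
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=4.0)
field /= field.std()

# Local maxima: points equal to the maximum over their 3x3 neighbourhood
is_peak = field == maximum_filter(field, size=3)

for nu in [0.0, 1.0, 2.0, 3.0]:
    n_peaks = np.count_nonzero(is_peak & (field > nu))
    print(f"peaks above {nu:.0f} sigma: {n_peaks}")
```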

  11. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
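
    Subagging fits the same base estimator to many random subsamples drawn without replacement and aggregates the fits, with the spread across subsample fits providing the uncertainty estimate mentioned above. The sketch below applies the idea to a noisy, outlier-contaminated synthetic trend using a low-order polynomial as the base fit; the subsample fraction, polynomial degree and variable names are illustrative assumptions, not the SBR method's actual base learner.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "groundwater level" series: slow trend + noise + a few outliers
t = np.linspace(0, 10, 200)
y = 0.3 * t + np.sin(t) + rng.normal(scale=0.3, size=t.size)
y[rng.choice(t.size, 5, replace=False)] += 4.0

def subagging_trend(t, y, n_models=200, frac=0.5, degree=3):
    """Fit a low-order polynomial to many random subsamples and aggregate."""
    preds = np.empty((n_models, t.size))
    m = int(frac * t.size)
    for k in range(n_models):
        idx = rng.choice(t.size, size=m, replace=False)  # subsample WITHOUT replacement
        coeffs = np.polyfit(t[idx], y[idx], degree)
        preds[k] = np.polyval(coeffs, t)
    return preds.mean(axis=0), preds.std(axis=0)         # trend and its spread

trend, sd = subagging_trend(t, y)
print(trend[:3], sd[:3])   # estimated trend and uncertainty at the first few times
```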

  12. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Permutation entropy of fractional Brownian motion and fractional Gaussian noise

    International Nuclear Information System (INIS)

    Zunino, L.; Perez, D.G.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2008-01-01

    We have worked out theoretical curves for the permutation entropy of the fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy-gap in the transition between these processes, observed previously via numerical results, has been here theoretically validated. Also, we have analyzed the behaviour of the permutation entropy of the fractional Gaussian noise for different time delays
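
    Permutation entropy in the Bandt-Pompe sense is computed from the relative frequencies of ordinal patterns of length D in the series. The sketch below evaluates the normalized permutation entropy for white Gaussian noise (fractional Gaussian noise with H = 0.5, where the value should be close to 1) and for its cumulative sum, a Brownian-motion-like path with a smaller value; the embedding dimension and delay are illustrative choices.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, dim=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series."""
    n = len(x) - (dim - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + dim * delay:delay]
        key = tuple(np.argsort(window))          # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    H = -np.sum(p * np.log(p))
    return H / np.log(factorial(dim))            # normalize to [0, 1]

rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)                # fGn with H = 0.5 (white noise)
walk = np.cumsum(noise)                          # Brownian-motion-like path

print(permutation_entropy(noise))                # close to 1
print(permutation_entropy(walk))                 # smaller than for white noise
```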

  14. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Ahmed, Sajid

    2016-01-13

    Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.

  15. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Ahmed, Sajid; Alouini, Mohamed-Slim; Jardak, Seifallah

    2016-01-01

    Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.

  16. Detection of range migrating targets in compound-Gaussian clutter

    NARCIS (Netherlands)

    Petrov, N.; le Chevalier, F.; Yarovyi, O.

    2018-01-01

    This paper deals with the problem of coherent radar detection of fast moving targets in a high range resolution mode. In particular, we are focusing on the spiky clutter modeled as a compound Gaussian process with rapidly varying power along range. Additionally, a fast moving target of interest has

  17. Multivariate Multiple Regression Models for a Big Data-Empowered SON Framework in Mobile Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yoonsu Shin

    2016-01-01

    Full Text Available In the 5G era, the operational cost of mobile wireless networks will significantly increase. Further, massive network capacity and zero latency will be needed because everything will be connected to mobile networks. Thus, self-organizing networks (SON are needed, which expedite automatic operation of mobile wireless networks, but have challenges to satisfy the 5G requirements. Therefore, researchers have proposed a framework to empower SON using big data. The recent framework of a big data-empowered SON analyzes the relationship between key performance indicators (KPIs and related network parameters (NPs using machine-learning tools, and it develops regression models using a Gaussian process with those parameters. The problem, however, is that the methods of finding the NPs related to the KPIs differ individually. Moreover, the Gaussian process regression model cannot determine the relationship between a KPI and its various related NPs. In this paper, to solve these problems, we proposed multivariate multiple regression models to determine the relationship between various KPIs and NPs. If we assume one KPI and multiple NPs as one set, the proposed models help us process multiple sets at one time. Also, we can find out whether some KPIs are conflicting or not. We implement the proposed models using MapReduce.
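
    As a hedged sketch of the multivariate multiple regression idea described here (several KPIs regressed jointly on several NPs), the example below estimates one coefficient matrix for all responses at once with ordinary least squares; the data, dimensions and noise level are invented for illustration and are not from the SON framework.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: n samples, p network parameters (NPs), k key performance indicators (KPIs).
n, p, k = 500, 4, 3
X = rng.normal(size=(n, p))                               # NPs
B_true = rng.normal(size=(p, k))                          # each column relates all NPs to one KPI
Y = X @ B_true + rng.normal(scale=0.1, size=(n, k))       # KPIs

# Multivariate multiple regression: one coefficient matrix estimated for all KPIs simultaneously.
X1 = np.column_stack([np.ones(n), X])                     # add intercept
B_hat, *_ = np.linalg.lstsq(X1, Y, rcond=None)            # shape (p + 1, k)

print("estimated coefficients (rows: intercept + NPs, columns: KPIs):")
print(np.round(B_hat, 2))
```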

  18. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  19. Monogamy inequality for distributed gaussian entanglement.

    Science.gov (United States)

    Hiroshima, Tohya; Adesso, Gerardo; Illuminati, Fabrizio

    2007-02-02

    We show that for all n-mode Gaussian states of continuous variable systems, the entanglement shared among n parties exhibits the fundamental monogamy property. The monogamy inequality is proven by introducing the Gaussian tangle, an entanglement monotone under Gaussian local operations and classical communication, which is defined in terms of the squared negativity in complete analogy with the case of n-qubit systems. Our results elucidate the structure of quantum correlations in many-body harmonic lattice systems.

  20. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
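
    The following is a hedged sketch of a mixed-kernel SVR in the spirit of this record: it mixes a Gaussian (RBF) kernel with an ordinary polynomial kernel through a precomputed Gram matrix. It is not the paper's orthogonal-polynomial construction or its Sobol-index post-processing; the mixing weight, kernel parameters and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)
X_train, X_test, y_train = X[:150], X[150:], y[:150]

def mixed_kernel(A, B, weight=0.5, gamma=1.0, degree=3):
    """Convex combination of a Gaussian (RBF) kernel and a polynomial kernel."""
    return weight * rbf_kernel(A, B, gamma=gamma) + (1 - weight) * polynomial_kernel(A, B, degree=degree)

svr = SVR(kernel="precomputed", C=10.0)
svr.fit(mixed_kernel(X_train, X_train), y_train)          # Gram matrix on training points
y_pred = svr.predict(mixed_kernel(X_test, X_train))       # cross-kernel against training points
print("first five predictions:", np.round(y_pred[:5], 3))
```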

  1. Breaking Gaussian incompatibility on continuous variable quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Kiukas, Jukka, E-mail: jukka.kiukas@aber.ac.uk [Department of Mathematics, Aberystwyth University, Penglais, Aberystwyth, SY23 3BZ (United Kingdom); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy)

    2015-08-15

    We characterise Gaussian quantum channels that are Gaussian incompatibility breaking, that is, transform every set of Gaussian measurements into a set obtainable from a joint Gaussian observable via Gaussian postprocessing. Such channels represent local noise which renders measurements useless for Gaussian EPR-steering, providing the appropriate generalisation of entanglement breaking channels for this scenario. Understanding the structure of Gaussian incompatibility breaking channels contributes to the resource theory of noisy continuous variable quantum information protocols.

  2. Learning Inverse Rig Mappings by Nonlinear Regression.

    Science.gov (United States)

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework to design inverse rig-functions-functions that map low level representations of a character's pose such as joint positions or surface geometry to the representation used by animators called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low level representations such as joint angles, joint positions, or vertex coordinates. This difference often stops the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
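
    A minimal, hedged toy example of the Gaussian process variant of this idea is given below: a GP regressor learns the inverse map from a low-level pose representation back to rig controls. The "rig" here is an invented two-parameter nonlinear map, not data from the paper, and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Assumed toy rig: two rig controls drive a 4-D "joint position" pose through an unknown nonlinear map.
controls = rng.uniform(-1, 1, size=(80, 2))               # animator-facing rig parameters
poses = np.column_stack([
    np.sin(controls[:, 0]), np.cos(controls[:, 1]),
    controls[:, 0] * controls[:, 1], controls[:, 0] ** 2,
]) + rng.normal(scale=0.01, size=(80, 4))                  # low-level pose representation

# Learn the inverse mapping pose -> rig controls with Gaussian process regression.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(poses, controls)

new_pose = poses[:1]                                       # a pose arriving from motion synthesis
estimated_controls = gpr.predict(new_pose)
print("estimated rig controls:", np.round(estimated_controls, 3))
```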

  3. White Gaussian Noise - Models for Engineers

    Science.gov (United States)

    Jondral, Friedrich K.

    2018-04-01

    This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i. e. the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectrum densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio as the most important quality measure used in communications, control or measurement technology is accurately introduced. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
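
    To make the final application concrete, here is a hedged, minimal simulation of 4-QAM transmission over an additive WGN channel with minimum-distance (maximum likelihood) detection; the SNR and symbol count are arbitrary and the code is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# 4-QAM (QPSK) constellation with unit average symbol energy.
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

n_symbols = 100_000
snr_db = 8.0
tx_idx = rng.integers(0, 4, n_symbols)
tx = constellation[tx_idx]

# Additive white Gaussian noise with per-dimension variance N0/2 (Es = 1, so N0 = Es / SNR).
n0 = 10 ** (-snr_db / 10)
noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_symbols) + 1j * rng.standard_normal(n_symbols))
rx = tx + noise

# ML detection for equiprobable symbols in AWGN reduces to minimum Euclidean distance.
distances = np.abs(rx[:, None] - constellation[None, :])
rx_idx = distances.argmin(axis=1)

ser = np.mean(rx_idx != tx_idx)
print(f"symbol error rate at {snr_db} dB SNR: {ser:.4f}")
```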

  4. Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions.

    Science.gov (United States)

    Rativa, Diego; Fernandes, Bruno J T; Roque, Alexandre

    2018-01-01

    Height and weight are measurements used to track nutritional diseases, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or may be unable to communicate, and a combination of these factors may prevent accurate measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian process regression, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all cases, the predictions are insensitive to ethnicity and to gender if more than two anthropometric parameters are analyzed. The learning-model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care.

  5. When non-Gaussian states are Gaussian: Generalization of nonseparability criterion for continuous variables

    International Nuclear Information System (INIS)

    McHugh, Derek; Buzek, Vladimir; Ziman, Mario

    2006-01-01

    We present a class of non-Gaussian two-mode continuous-variable states for which the separability criterion for Gaussian states can be employed to detect whether they are separable or not. These states reduce to the two-mode Gaussian states as a special case

  6. Quantum beamstrahlung from gaussian bunches

    International Nuclear Information System (INIS)

    Chen, P.

    1987-08-01

    The method of Baier and Katkov is applied to calculate the correction terms to the Sokolov-Ternov radiation formula due to the variation of the magnetic field strength along the trajectory of a radiating particle. We carry the calculation up to the second order in the power expansion of Bτ/B, where τ is the formation time of radiation. The expression is then used to estimate the quantum beamstrahlung average energy loss from e⁺e⁻ bunches with gaussian distribution in bunch currents. We show that the effect of the field variation is to reduce the average energy loss from previous calculations based on the Sokolov-Ternov formula or its equivalent. Due to the limitation of our method, only an upper bound of the reduction is obtained. 18 refs

  7. Logistic Regression: Concept and Application

    Science.gov (United States)

    Cokluk, Omay

    2010-01-01

    The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…

  8. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
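
    A hedged, minimal sketch of the kind of model selection discussed here follows: Gaussian-kernel ridge regression with the penalty and kernel width chosen from a small grid by cross-validation. The grid values and toy data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X[:, 0]) + rng.normal(scale=0.1, size=300)

# Small grid over the penalty (alpha) and the Gaussian kernel width (gamma), selected by 5-fold CV.
grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 0.5, 1.0, 5.0]}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)

print("selected parameters:", search.best_params_)
print("CV mean squared error:", -search.best_score_)
```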

  9. Gaussian limit of compact spin systems

    International Nuclear Information System (INIS)

    Bellissard, J.; Angelis, G.F. de

    1981-01-01

    It is shown that the Wilson and Wilson-Villain U(1) models reproduce, in the low coupling limit, the gaussian lattice approximation of the Euclidean electromagnetic field. By the same methods it is also possible to prove that the plane rotator and the Villain model share a common gaussian behaviour in the low temperature limit. (Auth.)

  10. Entanglement in Gaussian matrix-product states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Ericsson, Marie

    2006-01-01

    Gaussian matrix-product states are obtained as the outputs of projection operations from an ancillary space of M infinitely entangled bonds connecting neighboring sites, applied at each of N sites of a harmonic chain. Replacing the projections by associated Gaussian states, the building blocks, we show that the entanglement range in translationally invariant Gaussian matrix-product states depends on how entangled the building blocks are. In particular, infinite entanglement in the building blocks produces fully symmetric Gaussian states with maximum entanglement range. From their peculiar properties of entanglement sharing, a basic difference with spin chains is revealed: Gaussian matrix-product states can possess unlimited, long-range entanglement even with minimum number of ancillary bonds (M=1). Finally we discuss how these states can be experimentally engineered from N copies of a three-mode building block and N two-mode finitely squeezed states

  11. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    DEFF Research Database (Denmark)

    Berg, Jacob; Natarajan, Anand; Mann, Jakob

    2016-01-01

    From large-eddy simulations of atmospheric turbulence, a representation of Gaussian turbulence is constructed by randomizing the phases of the individual modes of variability. Time series of Gaussian turbulence are constructed and compared with its non-Gaussian counterpart. Time series from the two ... taking into account the safety factor for extreme moments. Other extreme load moments as well as the fatigue loads are not affected because of the use of non-Gaussian turbulent inflow. It is suggested that the turbine thus acts like a low-pass filter that averages out the non-Gaussian behaviour, which ...

  12. Coherence degree of the fundamental Bessel-Gaussian beam in turbulent atmosphere

    Science.gov (United States)

    Lukin, Igor P.

    2017-11-01

    In this article the coherence of a fundamental Bessel-Gaussian optical beam in a turbulent atmosphere is analyzed. The analysis is based on the solution of the equation for the transverse second-order mutual coherence function of a fundamental Bessel-Gaussian beam of optical radiation. The behaviour of the coherence degree of a fundamental Bessel-Gaussian beam is examined as a function of the beam parameters and the characteristics of the turbulent atmosphere. It is revealed that at low levels of fluctuations in the turbulent atmosphere the coherence degree of a fundamental Bessel-Gaussian beam has a characteristic oscillating form. At high levels of fluctuations the coherence degree is described by a single-scale decreasing curve which, as the level of fluctuations along the propagation path increases, approaches the corresponding characteristic of a spherical optical wave.

  13. Increasing Entanglement between Gaussian States by Coherent Photon Subtraction

    DEFF Research Database (Denmark)

    Ourjoumtsev, Alexei; Dantan, Aurelien Romain; Tualle Brouri, Rosa

    2007-01-01

    We experimentally demonstrate that the entanglement between Gaussian entangled states can be increased by non-Gaussian operations. Coherent subtraction of single photons from Gaussian quadrature-entangled light pulses, created by a nondegenerate parametric amplifier, produces delocalized states...

  14. Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression

    Science.gov (United States)

    Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S.

    2017-11-01

    We address the prediction of low-visibility events at airports using machine-learning regression. The proposed model successfully forecasts low-visibility events in terms of the runway visual range at the airport, with the use of support-vector regression, neural networks (multi-layer perceptrons and extreme-learning machines) and Gaussian-process algorithms. We assess the performance of these algorithms based on real data collected at the Valladolid airport, Spain. We also propose a study of the atmospheric variables measured at a nearby tower related to low-visibility atmospheric conditions, since they are considered as the inputs of the different regressors. A pre-processing procedure of these input variables with wavelet transforms is also described. The results show that the proposed machine-learning algorithms are able to predict low-visibility events well. The Gaussian process is the best algorithm among those analyzed, obtaining over 98% correct classification of low-visibility events when the runway visual range is above 1000 m, and about 80% below this threshold. The performance of all the machine-learning algorithms tested is clearly affected in extreme low-visibility conditions. Algorithm performance is also analyzed for daytime and nighttime conditions and for different prediction time horizons.

  15. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
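
    As a hedged numerical sketch of reduced rank regression (one standard formulation, not necessarily the estimator discussed in this entry), the example below computes the OLS coefficient matrix and projects it onto the leading singular directions of the fitted values to obtain a rank-constrained estimate. Data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy multivariate regression: q responses truly driven by a rank-2 coefficient matrix.
n, p, q, r = 400, 6, 5, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ B_true + rng.normal(scale=0.1, size=(n, q))

# Ordinary least squares fit, then project onto the top-r right singular vectors of the fitted values.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]                    # projection onto the leading r response directions
B_rrr = B_ols @ P                        # reduced rank regression estimate (rank <= r)

print("rank of RRR estimate:", np.linalg.matrix_rank(B_rrr))
print("max |B_rrr - B_true|:", np.abs(B_rrr - B_true).max().round(3))
```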

  16. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    Science.gov (United States)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods that reduce the amount of approximation used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step for the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating

  17. Loop corrections to primordial non-Gaussianity

    Science.gov (United States)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  18. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
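
    As a hedged illustration of the modelling idea (not the authors' data or fitting procedure), the sketch below fits a three-component Gaussian mixture to synthetic RR-interval data and reports the component means, standard deviations and weights.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Synthetic RR intervals (seconds): three overlapping regimes standing in for real HRV data.
rr = np.concatenate([
    rng.normal(0.80, 0.03, 2000),
    rng.normal(0.95, 0.05, 1500),
    rng.normal(1.10, 0.04, 500),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(rr)

print("means (s):   ", np.round(gmm.means_.ravel(), 3))
print("std devs (s):", np.round(np.sqrt(gmm.covariances_.ravel()), 3))
print("weights:     ", np.round(gmm.weights_, 3))
```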

  19. Non-Gaussianity from isocurvature perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, Masahiro; Nakayama, Kazunori; Sekiguchi, Toyokazu; Suyama, Teruaki [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Takahashi, Fuminobu, E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: nakayama@icrr.u-tokyo.ac.jp, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: suyama@icrr.u-tokyo.ac.jp, E-mail: fuminobu.takahashi@ipmu.jp [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa 277-8568 (Japan)

    2008-11-15

    We develop a formalism for studying non-Gaussianity in both curvature and isocurvature perturbations. It is shown that non-Gaussianity in the isocurvature perturbation between dark matter and photons leaves distinct signatures in the cosmic microwave background temperature fluctuations, which may be confirmed in future experiments, or possibly even in the currently available observational data. As an explicit example, we consider the quantum chromodynamics axion and show that it can actually induce sizable non-Gaussianity for the inflationary scale H_inf = O(10^9-10^11) GeV.

  20. Gaussian measures of entanglement versus negativities: Ordering of two-mode Gaussian states

    International Nuclear Information System (INIS)

    Adesso, Gerardo; Illuminati, Fabrizio

    2005-01-01

    We study the entanglement of general (pure or mixed) two-mode Gaussian states of continuous-variable systems by comparing the two available classes of computable measures of entanglement: entropy-inspired Gaussian convex-roof measures and positive partial transposition-inspired measures (negativity and logarithmic negativity). We first review the formalism of Gaussian measures of entanglement, adopting the framework introduced in M. M. Wolf et al., Phys. Rev. A 69, 052320 (2004), where the Gaussian entanglement of formation was defined. We compute explicitly Gaussian measures of entanglement for two important families of nonsymmetric two-mode Gaussian state: namely, the states of extremal (maximal and minimal) negativities at fixed global and local purities, introduced in G. Adesso et al., Phys. Rev. Lett. 92, 087901 (2004). This analysis allows us to compare the different orderings induced on the set of entangled two-mode Gaussian states by the negativities and by the Gaussian measures of entanglement. We find that in a certain range of values of the global and local purities (characterizing the covariance matrix of the corresponding extremal states), states of minimum negativity can have more Gaussian entanglement of formation than states of maximum negativity. Consequently, Gaussian measures and negativities are definitely inequivalent measures of entanglement on nonsymmetric two-mode Gaussian states, even when restricted to a class of extremal states. On the other hand, the two families of entanglement measures are completely equivalent on symmetric states, for which the Gaussian entanglement of formation coincides with the true entanglement of formation. Finally, we show that the inequivalence between the two families of continuous-variable entanglement measures is somehow limited. Namely, we rigorously prove that, at fixed negativities, the Gaussian measures of entanglement are bounded from below. Moreover, we provide some strong evidence suggesting that they

  1. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  2. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
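
    A hedged sketch of the comparison described here is given below: a Gaussian non-homogeneous regression whose mean depends on the ensemble mean and whose spread depends on the ensemble spread, fitted once by minimizing the negative log-likelihood and once by minimizing the closed-form Gaussian CRPS. The synthetic data, link functions and optimizer settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(9)

# Synthetic post-processing data: ensemble mean m and spread s, verifying observation y.
n = 2000
m = rng.normal(15, 5, n)
s = rng.uniform(0.5, 2.0, n)
y = 1.0 + 0.9 * m + rng.normal(scale=0.5 + 0.8 * s)

def moments(theta, m, s):
    a, b, c, d = theta
    mu = a + b * m
    sigma = np.exp(c + d * np.log(s))            # log link keeps the spread positive
    return mu, sigma

def neg_log_lik(theta):
    mu, sigma = moments(theta, m, s)
    return -np.sum(norm.logpdf(y, mu, sigma))

def mean_crps(theta):
    # Closed-form CRPS of a Gaussian predictive distribution.
    mu, sigma = moments(theta, m, s)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return np.mean(crps)

x0 = np.array([0.0, 1.0, 0.0, 0.0])
theta_ml = minimize(neg_log_lik, x0, method="Nelder-Mead").x
theta_crps = minimize(mean_crps, x0, method="Nelder-Mead").x
print("maximum likelihood:", np.round(theta_ml, 3))
print("minimum CRPS:      ", np.round(theta_crps, 3))
```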

  3. A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Weerakkody, Sean [Carnegie Mellon Univ., Pittsburgh, PA (United States); Ozel, Omur [Carnegie Mellon Univ., Pittsburgh, PA (United States); Sinopoli, Bruno [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2017-11-02

    We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.

  4. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
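
    For reference, here is a hedged, purely sequential sketch of fraction-free (Bareiss) Gaussian elimination on an integer system, followed by exact rational back-substitution; it illustrates the elimination scheme the array processors implement, not their parallel architecture.

```python
from fractions import Fraction

def fraction_free_solve(A, b):
    """Solve A x = b for integer A, b using fraction-free (Bareiss) elimination."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]     # augmented integer matrix
    prev = 1
    for k in range(n - 1):
        if M[k][k] == 0:                               # simple pivot swap on a zero pivot
            swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
            M[k], M[swap] = M[swap], M[k]
        for i in range(k + 1, n):
            for j in range(k + 1, n + 1):
                # Exact integer division: every intermediate entry stays an integer.
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    # Back-substitution with exact rational arithmetic.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        acc = Fraction(M[i][n])
        for j in range(i + 1, n):
            acc -= M[i][j] * x[j]
        x[i] = acc / M[i][i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print([str(v) for v in fraction_free_solve(A, b)])     # expected: ['2', '3', '-1']
```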

  5. Time rescaling and Gaussian properties of the fractional Brownian motions

    International Nuclear Information System (INIS)

    Maccone, C.

    1981-01-01

    The fractional Brownian motions are proved to be a class of Gaussian (normal) stochastic processes suitably rescaled in time. Some consequences affecting their eigenfunction expansion (Karhunen-Loeve expansion) are inferred. A known formula of Cameron and Martin is generalized. The first-passage time probability density is found. The partial differential equation of the fractional Brownian diffusion is obtained. And finally the increments of the fractional Brownian motions are proved to be independent for nonoverlapping time intervals. (author)

  6. Optimal unitary dilation for bosonic Gaussian channels

    International Nuclear Information System (INIS)

    Caruso, Filippo; Eisert, Jens; Giovannetti, Vittorio; Holevo, Alexander S.

    2011-01-01

    A general quantum channel can be represented in terms of a unitary interaction between the information-carrying system and a noisy environment. In this paper the minimal number of quantum Gaussian environmental modes required to provide a unitary dilation of a multimode bosonic Gaussian channel is analyzed for both pure and mixed environments. We compute this quantity in the case of pure environment corresponding to the Stinespring representation and give an improved estimate in the case of mixed environment. The computations rely, on one hand, on the properties of the generalized Choi-Jamiolkowski state and, on the other hand, on an explicit construction of the minimal dilation for arbitrary bosonic Gaussian channel. These results introduce a new quantity reflecting ''noisiness'' of bosonic Gaussian channels and can be applied to address some issues concerning transmission of information in continuous variables systems.

  7. Galaxy bias and primordial non-Gaussianity

    Energy Technology Data Exchange (ETDEWEB)

    Assassi, Valentin; Baumann, Daniel [DAMTP, Cambridge University, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Schmidt, Fabian, E-mail: assassi@ias.edu, E-mail: D.D.Baumann@uva.nl, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany)

    2015-12-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation.

  8. Optimal cloning of mixed Gaussian states

    International Nuclear Information System (INIS)

    Guta, Madalin; Matsumoto, Keiji

    2006-01-01

    We construct the optimal one to two cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator, with a fixed and known temperature. The transformation is Gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance. The proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of Gaussian states which is then reduced to an optimization problem for diagonal states of a quantum oscillator. A key concept in finding the optimum is that of stochastic ordering which plays a similar role in the purely classical problem of Gaussian cloning. The result is then extended to the case of n to m cloning of mixed Gaussian states

  9. Encoding information using laguerre gaussian modes

    CSIR Research Space (South Africa)

    Trichili, A

    2015-08-01

    Full Text Available The authors experimentally demonstrate an information encoding protocol using the two degrees of freedom of Laguerre Gaussian modes having different radial and azimuthal components. A novel method, based on digital holography, for information...

  10. Interweave Cognitive Radio with Improper Gaussian Signaling

    KAUST Repository

    Hedhly, Wafa; Amin, Osama; Alouini, Mohamed-Slim

    2018-01-01

    Improper Gaussian signaling (IGS) has proven its ability in improving the performance of underlay and overlay cognitive radio paradigms. In this paper, the interweave cognitive radio paradigm is studied when the cognitive user employs IGS

  11. Galaxy bias and primordial non-Gaussianity

    International Nuclear Information System (INIS)

    Assassi, Valentin; Baumann, Daniel; Schmidt, Fabian

    2015-01-01

    We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation

  12. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit

    2014-07-28

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  13. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2014-01-01

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  14. American Option Pricing using GARCH models and the Normal Inverse Gaussian distribution

    DEFF Research Database (Denmark)

    Stentoft, Lars Peter

    In this paper we propose a feasible way to price American options in a model with time varying volatility and conditional skewness and leptokurtosis using GARCH processes and the Normal Inverse Gaussian distribution. We show how the risk neutral dynamics can be obtained in this model, and an examination of its properties shows that there are important option pricing differences compared to the Gaussian case as well as to the symmetric special case. A large scale empirical examination shows that our model outperforms the Gaussian case for pricing options on three large US stocks as well as a major index.

  15. Gaussian sum rules for optical functions

    International Nuclear Information System (INIS)

    Kimel, I.

    1981-12-01

    A new (Gaussian) type of sum rules (GSR) for several optical functions is presented. The functions considered are: dielectric permeability, refractive index, energy loss function, rotatory power and ellipticity (circular dichroism). While reducing to the usual type of sum rules in a certain limit, the GSR contain in general a Gaussian factor that serves to improve convergence. GSR might be useful in analysing experimental data. (Author)

  16. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7 .

  17. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  18. Non-Gaussian halo assembly bias

    International Nuclear Information System (INIS)

    Reid, Beth A.; Verde, Licia; Dolag, Klaus; Matarrese, Sabino; Moscardini, Lauro

    2010-01-01

    The strong dependence of the large-scale dark matter halo bias on the (local) non-Gaussianity parameter, f_NL, offers a promising avenue towards constraining primordial non-Gaussianity with large-scale structure surveys. In this paper, we present the first detection of the dependence of the non-Gaussian halo bias on halo formation history using N-body simulations. We also present an analytic derivation of the expected signal based on the extended Press-Schechter formalism. In excellent agreement with our analytic prediction, we find that the halo formation history-dependent contribution to the non-Gaussian halo bias (which we call non-Gaussian halo assembly bias) can be factorized in a form approximately independent of redshift and halo mass. The correction to the non-Gaussian halo bias due to the halo formation history can be as large as 100%, with a suppression of the signal for recently formed halos and enhancement for old halos. This could in principle be a problem for realistic galaxy surveys if observational selection effects were to pick galaxies occupying only recently formed halos. Current semi-analytic galaxy formation models, for example, imply an enhancement in the expected signal of ∼23% and ∼48% for galaxies at z = 1 selected by stellar mass and star formation rate, respectively

  19. Robust geographically weighted regression of modeling the Air Polluter Standard Index (APSI)

    Science.gov (United States)

    Warsito, Budi; Yasin, Hasbi; Ispriyanti, Dwi; Hoyyi, Abdul

    2018-05-01

    The Geographically Weighted Regression (GWR) model has been widely applied in many practical fields for exploring the spatial heterogeneity of a regression model. However, this method is inherently not robust to outliers. Outliers commonly exist in data sets and may lead to a distorted estimate of the underlying regression model. One solution for handling outliers in the regression model is to use robust models, giving rise to Robust Geographically Weighted Regression (RGWR). This research aims to aid the government in the policy-making process related to air pollution mitigation by developing a standard index model for air pollution (Air Polluter Standard Index - APSI) based on the RGWR approach. In this research, we also consider seven variables that are directly related to the air pollution level: the traffic velocity, the population density, the business center aspect, the air humidity, the wind velocity, the air temperature, and the area size of the urban forest. The best model is determined by the smallest AIC value. There are significant differences between ordinary regression and RGWR in this case, but basic GWR using the Gaussian kernel is the best model for modeling the APSI because it has the smallest AIC.
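
    As a hedged sketch of the basic (non-robust) GWR building block used here, the example below fits a separate Gaussian-kernel weighted least-squares regression at one location, so the coefficients can vary over space. The synthetic spatial data, single predictor and bandwidth are assumptions, not the APSI data set.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic spatial data: the slope of the covariate varies smoothly over space.
n = 300
coords = rng.uniform(0, 10, size=(n, 2))                  # monitoring-site locations
x = rng.normal(size=n)                                    # a single predictor (e.g., traffic velocity)
beta = 1.0 + 0.3 * coords[:, 0]                           # spatially varying true coefficient
y = 2.0 + beta * x + rng.normal(scale=0.3, size=n)        # response (e.g., an air pollution index)

def gwr_at(point, coords, x, y, bandwidth=2.0):
    """Local coefficients at one location via Gaussian-kernel weighted least squares."""
    d2 = np.sum((coords - point) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))                 # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)       # [local intercept, local slope]

site = coords[0]
print("local intercept and slope at site 0:", np.round(gwr_at(site, coords, x, y), 3))
print("true slope at site 0:", round(1.0 + 0.3 * site[0], 3))
```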

  20. Adaptive Laguerre-Gaussian variant of the Gaussian beam expansion method.

    Science.gov (United States)

    Cagniot, Emmanuel; Fromager, Michael; Ait-Ameur, Kamel

    2009-11-01

    A variant of the Gaussian beam expansion method consists in expanding the Bessel function J0 appearing in the Fresnel-Kirchhoff integral into a finite sum of complex Gaussian functions to derive an analytical expression for a Laguerre-Gaussian beam diffracted through a hard-edge aperture. However, the validity range of the approximation depends on the number of expansion coefficients that are obtained by optimization-computation directly. We propose another solution consisting in expanding J0 onto a set of collimated Laguerre-Gaussian functions whose waist depends on their number and then, depending on its argument, predicting the suitable number of expansion functions to calculate the integral recursively.

  1. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  2. A Decentralized Receiver in Gaussian Interference

    Directory of Open Access Journals (Sweden)

    Christian D. Chapman

    2018-04-01

    Full Text Available Bounds are developed on the maximum communications rate between a transmitter and a fusion node aided by a cluster of distributed receivers with limited resources for cooperation, all in the presence of an additive Gaussian interferer. The receivers cannot communicate with one another and can only convey processed versions of their observations to the fusion center through a Local Array Network (LAN with limited total throughput. The effectiveness of each bound’s approach for mitigating a strong interferer is assessed over a wide range of channels. It is seen that, if resources are shared effectively, even a simple quantize-and-forward strategy can mitigate an interferer 20 dB stronger than the signal in a diverse range of spatially Ricean channels. Monte-Carlo experiments for the bounds reveal that, while achievable rates are stable when varying the receiver’s observed scattered-path to line-of-sight signal power, the receivers must adapt how they share resources in response to this change. The bounds analyzed are proven to be achievable and are seen to be tight with capacity when LAN resources are either ample or limited.

  3. Nonlinear and non-Gaussian Bayesian based handwriting beautification

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2013-03-01

    A framework is proposed in this paper to effectively and efficiently beautify handwriting by means of a novel nonlinear and non-Gaussian Bayesian algorithm. In the proposed framework, the format and size of the handwriting image are first normalized, and then a typeface from the computer system is applied to optimize the visual effect of the handwriting. Bayesian statistics is exploited to characterize the handwriting beautification process as a Bayesian dynamic model. The model parameters that translate, rotate and scale the typeface are controlled by the state equation, and the matching optimization between the handwriting and the transformed typeface is carried out by the measurement equation. Finally, the new typeface, which is transformed from the original one and achieves the best nonlinear and non-Gaussian optimization, is the beautification result of the handwriting. Experimental results demonstrate that the proposed framework provides a creative handwriting beautification methodology that improves visual acceptance.

  4. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost of fitting an fGn model of length n using a likelihood-based approach is O(n^2), exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to O(n^3). This paper presents an approximate fGn model of O(n) computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
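
    The sketch below gives a hedged flavor of the approximation idea: the fGn autocorrelation function is matched by a weighted sum of four AR(1) autocorrelations. The fixed AR coefficients and the plain least-squares fit are assumptions for illustration; the paper's actual fitting procedure may differ.

```python
import numpy as np

def fgn_acf(lags, hurst):
    """Autocorrelation function of fractional Gaussian noise."""
    k = np.asarray(lags, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst) + np.abs(k - 1) ** (2 * hurst))

hurst = 0.8
lags = np.arange(0, 200)
target = fgn_acf(lags, hurst)

# Approximate the fGn ACF by a weighted sum of AR(1) ACFs, rho_i(k) = phi_i**k.
phis = np.array([0.30, 0.75, 0.95, 0.99])             # fixed AR(1) coefficients (an assumption)
basis = phis[None, :] ** lags[:, None]                # shape (n_lags, 4)
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ weights

print("fitted weights:", np.round(weights, 3))
print("max absolute ACF error:", np.abs(approx - target).max().round(4))
```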

  5. New gaussian points for the solution of first order ordinary ...

    African Journals Online (AJOL)

    Numerical experiments carried out using the new Gaussian points revealed their efficiency on stiff differential equations. The results also reveal that methods using the new Gaussian points are more accurate than those using the standard Gaussian points on non-stiff initial value problems. Keywords: Gaussian points ...

  6. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
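
    A hedged, minimal example of the principle follows: several conditional quantiles of a heteroscedastic outcome are estimated, a case where the quantile slopes differ even though a mean regression would report a single effect. The wage/experience data are synthetic stand-ins, not the labor-market application referenced above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Heteroscedastic toy data: the spread of wages grows with experience.
experience = rng.uniform(0, 30, 1000)
wage = 10 + 0.5 * experience + rng.normal(scale=1 + 0.2 * experience)

X = sm.add_constant(experience)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(wage, X).fit(q=q)
    print(f"quantile {q:.1f}: intercept={res.params[0]:.2f}, slope={res.params[1]:.2f}")
```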

  7. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  8. Abstract Expression Grammar Symbolic Regression

    Science.gov (United States)

    Korns, Michael F.

    This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, which is faster, and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial strength symbolic regression systems.

  9. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    Science.gov (United States)

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, crucial in optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion, and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models and more specifically Gaussian process regression (GPR) and relevance vector regression (RVR) are utilized for making predictions over future load demand. Both models, i.e., GPR and RVR, are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon taken from the New England Area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean average percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
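
    An illustrative sketch of the GPR part of such a forecast, on synthetic demand data rather than the New England series; the relevance vector counterpart is not shown since it is not part of standard scikit-learn.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        days = np.arange(365.0)[:, None]
        # synthetic demand: yearly cycle, weekly cycle, noise
        load = (100 + 20 * np.sin(2 * np.pi * days[:, 0] / 365)
                + 5 * np.sin(2 * np.pi * days[:, 0] / 7)
                + 2 * rng.standard_normal(365))

        kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(days, load)

        future = np.arange(365.0, 395.0)[:, None]          # 30-day-ahead horizon
        mean, std = gpr.predict(future, return_std=True)   # point forecast plus uncertainty
        print(mean[:5].round(1), std[:5].round(1))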

  10. Graphical calculus for Gaussian pure states

    International Nuclear Information System (INIS)

    Menicucci, Nicolas C.; Flammia, Steven T.; Loock, Peter van

    2011-01-01

    We provide a unified graphical calculus for all Gaussian pure states, including graph transformation rules for all local and semilocal Gaussian unitary operations, as well as local quadrature measurements. We then use this graphical calculus to analyze continuous-variable (CV) cluster states, the essential resource for one-way quantum computing with CV systems. Current graphical approaches to CV cluster states are only valid in the unphysical limit of infinite squeezing, and the associated graph transformation rules only apply when the initial and final states are of this form. Our formalism applies to all Gaussian pure states and subsumes these rules in a natural way. In addition, the term 'CV graph state' currently has several inequivalent definitions in use. Using this formalism we provide a single unifying definition that encompasses all of them. We provide many examples of how the formalism may be used in the context of CV cluster states: defining the 'closest' CV cluster state to a given Gaussian pure state and quantifying the error in the approximation due to finite squeezing; analyzing the optimality of certain methods of generating CV cluster states; drawing connections between this graphical formalism and bosonic Hamiltonians with Gaussian ground states, including those useful for CV one-way quantum computing; and deriving a graphical measure of bipartite entanglement for certain classes of CV cluster states. We mention other possible applications of this formalism and conclude with a brief note on fault tolerance in CV one-way quantum computing.

  11. Variational Gaussian approximation for Poisson data

    Science.gov (United States)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
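
    A one-dimensional toy version of the variational idea (a deliberate simplification of the paper's setting, with assumed values): a single Poisson observation with log-intensity x, a Gaussian prior on x, and a Gaussian q(x) chosen by numerically maximizing the evidence lower bound.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        y, m0, s0 = 7.0, 0.0, 1.0   # observation, prior mean, prior std

        def negative_elbo(params):
            m, log_s = params
            s = np.exp(log_s)
            # E_q[log Poisson(y | exp(x))] with q = N(m, s^2)
            expected_loglik = y * m - np.exp(m + 0.5 * s**2) - gammaln(y + 1.0)
            # KL( N(m, s^2) || N(m0, s0^2) )
            kl = np.log(s0 / s) + (s**2 + (m - m0) ** 2) / (2 * s0**2) - 0.5
            return -(expected_loglik - kl)

        opt = minimize(negative_elbo, x0=[0.0, 0.0])
        m_opt, s_opt = opt.x[0], np.exp(opt.x[1])
        print(f"variational Gaussian: mean={m_opt:.3f}, std={s_opt:.3f}")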

  12. Mode entanglement of Gaussian fermionic states

    Science.gov (United States)

    Spee, C.; Schwaiger, K.; Giedke, G.; Kraus, B.

    2018-04-01

    We investigate the entanglement of n-mode n-partite Gaussian fermionic states (GFS). First, we identify a reasonable definition of separability for GFS and derive a standard form for mixed states, to which any state can be mapped via Gaussian local unitaries (GLU). As the standard form is unique, two GFS are equivalent under GLU if and only if their standard forms coincide. Then, we investigate the important class of local operations assisted by classical communication (LOCC). These are central in entanglement theory as they allow one to partially order the entanglement contained in states. We show, however, that there are no nontrivial Gaussian LOCC (GLOCC) among pure n-partite (fully entangled) states. That is, any such GLOCC transformation can also be accomplished via GLU. To obtain further insight into the entanglement properties of such GFS, we investigate the richer class of Gaussian stochastic local operations assisted by classical communication (SLOCC). We characterize Gaussian SLOCC classes of pure n-mode n-partite states and derive them explicitly for few-mode states. Furthermore, we consider certain fermionic LOCC and show how to identify the maximally entangled set of pure n-mode n-partite GFS, i.e., the minimal set of states having the property that any other state can be obtained from one state inside this set via fermionic LOCC. We generalize these findings also to the pure m-mode n-partite (for m > n) case.

  13. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.

  14. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using the Xlisp-Stat language, called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is available.

  15. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  16. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    Science.gov (United States)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.

  17. Efficient Hardware Implementation For Fingerprint Image Enhancement Using Anisotropic Gaussian Filter.

    Science.gov (United States)

    Khan, Tariq Mahmood; Bailey, Donald G; Khan, Mohammad A U; Kong, Yinan

    2017-05-01

    A real-time image filtering technique is proposed which could result in faster implementation for fingerprint image enhancement. One major hurdle associated with fingerprint filtering techniques is the expensive nature of their hardware implementations. To circumvent this, a modified anisotropic Gaussian filter is efficiently adopted in hardware by decomposing the filter into two orthogonal Gaussians and an oriented line Gaussian. An architecture is developed for dynamically controlling the orientation of the line Gaussian filter. To further improve the performance of the filter, the input image is homogenized by a local image normalization. In the proposed structure, for a middle-range reconfigurable FPGA, both parallel compute-intensive and real-time demands were achieved. We manage to efficiently speed up the image-processing time and improve the resource utilization of the FPGA. Test results show an improved speed for its hardware architecture while maintaining reasonable enhancement benchmarks.

  18. Non-Gaussianity in island cosmology

    International Nuclear Information System (INIS)

    Piao Yunsong

    2009-01-01

    In this paper we fully calculate the non-Gaussianity of primordial curvature perturbation of the island universe by using the second order perturbation equation. We find that for the spectral index n_s ≅ 0.96, which is favored by current observations, the non-Gaussianity level f_NL seen in an island will generally lie between 30 and 60, which may be tested by the coming observations. In the landscape, the island universe is one of anthropically acceptable cosmological histories. Thus the results obtained in some sense mean the coming observations, especially the measurement of non-Gaussianity, will be significant to clarify how our position in the landscape is populated.

  19. Entanglement negativity bounds for fermionic Gaussian states

    Science.gov (United States)

    Eisert, Jens; Eisler, Viktor; Zimborás, Zoltán

    2018-04-01

    The entanglement negativity is a versatile measure of entanglement that has numerous applications in quantum information and in condensed matter theory. It can not only efficiently be computed in the Hilbert space dimension, but for noninteracting bosonic systems, one can compute the negativity efficiently in the number of modes. However, such an efficient computation does not carry over to the fermionic realm, the ultimate reason for this being that the partial transpose of a fermionic Gaussian state is no longer Gaussian. To provide a remedy for this state of affairs, in this work, we introduce efficiently computable and rigorous upper and lower bounds to the negativity, making use of techniques of semidefinite programming, building upon the Lagrangian formulation of fermionic linear optics, and exploiting suitable products of Gaussian operators. We discuss examples in quantum many-body theory and hint at applications in the study of topological properties at finite temperature.

  20. Bridging asymptotic independence and dependence in spatial extremes using Gaussian scale mixtures

    KAUST Repository

    Huser, Raphaël

    2017-06-23

    Gaussian scale mixtures are constructed as Gaussian processes with a random variance. They have non-Gaussian marginals and can exhibit asymptotic dependence unlike Gaussian processes, which are asymptotically independent except in the case of perfect dependence. In this paper, we study the extremal dependence properties of Gaussian scale mixtures and we unify and extend general results on their joint tail decay rates in both asymptotic dependence and independence cases. Motivated by the analysis of spatial extremes, we propose flexible yet parsimonious parametric copula models that smoothly interpolate from asymptotic dependence to independence and include the Gaussian dependence as a special case. We show how these new models can be fitted to high threshold exceedances using a censored likelihood approach, and we demonstrate that they provide valuable information about tail characteristics. In particular, by borrowing strength across locations, our parametric model-based approach can also be used to provide evidence for or against either asymptotic dependence class, hence complementing information given at an exploratory stage by the widely used nonparametric or parametric estimates of the χ and χ̄ coefficients. We demonstrate the capacity of our methodology by adequately capturing the extremal properties of wind speed data collected in the Pacific Northwest, US.

  1. Bridging asymptotic independence and dependence in spatial extremes using Gaussian scale mixtures

    KAUST Repository

    Huser, Raphaël; Opitz, Thomas; Thibaud, Emeric

    2017-01-01

    Gaussian scale mixtures are constructed as Gaussian processes with a random variance. They have non-Gaussian marginals and can exhibit asymptotic dependence unlike Gaussian processes, which are asymptotically independent except in the case of perfect dependence. In this paper, we study the extremal dependence properties of Gaussian scale mixtures and we unify and extend general results on their joint tail decay rates in both asymptotic dependence and independence cases. Motivated by the analysis of spatial extremes, we propose flexible yet parsimonious parametric copula models that smoothly interpolate from asymptotic dependence to independence and include the Gaussian dependence as a special case. We show how these new models can be fitted to high threshold exceedances using a censored likelihood approach, and we demonstrate that they provide valuable information about tail characteristics. In particular, by borrowing strength across locations, our parametric model-based approach can also be used to provide evidence for or against either asymptotic dependence class, hence complementing information given at an exploratory stage by the widely used nonparametric or parametric estimates of the χ and χ̄ coefficients. We demonstrate the capacity of our methodology by adequately capturing the extremal properties of wind speed data collected in the Pacific Northwest, US.

  2. Invariant measures on multimode quantum Gaussian states

    Science.gov (United States)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-12-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  3. Invariant measures on multimode quantum Gaussian states

    International Nuclear Information System (INIS)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-01-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  4. Invariant measures on multimode quantum Gaussian states

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, C. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Mancini, S. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); De Pasquale, A. [NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa (Italy); Facchi, P. [Dipartimento di Matematica and MECENAS, Universita di Bari, I-70125 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Florio, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Piazza del Viminale 1, I-00184 Roma (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy); Pascazio, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy)

    2012-12-15

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom-the symplectic eigenvalues-which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  5. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  6. Integration of non-Gaussian fields

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Mohr, Gunnar; Hoffmeyer, Pernille

    1996-01-01

    The limitations of the validity of the central limit theorem argument as applied to definite integrals of non-Gaussian random fields are empirically explored by way of examples. The purpose is to investigate in specific cases whether the asymptotic convergence to the Gaussian distribution is fast....... and Randrup-Thomsen, S. Reliability of silo ring under lognormal stochastic pressure using stochastic interpolation. Proc. IUTAM Symp., Probabilistic Structural Mechanics: Advances in Structural Reliability Methods, San Antonio, TX, USA, June 1993 (eds.: P. D. Spanos & Y.-T. Wu) pp. 134-162. Springer, Berlin...

  7. Quantum information theory with Gaussian systems

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, O.

    2006-04-06

    This thesis applies ideas and concepts from quantum information theory to systems of continuous-variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  8. Quantum information theory with Gaussian systems

    International Nuclear Information System (INIS)

    Krueger, O.

    2006-01-01

    This thesis applies ideas and concepts from quantum information theory to systems of continuous-variables such as the quantum harmonic oscillator. The focus is on three topics: the cloning of coherent states, Gaussian quantum cellular automata and Gaussian private channels. Cloning was investigated both for finite-dimensional and for continuous-variable systems. We construct a private quantum channel for the sequential encryption of coherent states with a classical key, where the key elements have finite precision. For the case of independent one-mode input states, we explicitly estimate this precision, i.e. the number of key bits needed per input state, in terms of these parameters. (orig.)

  9. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...

  10. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces all indices of multicollinearity diagnoses, the basic principle of principal component regression and determination of 'best' equation method. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0: including all calculating processes of the principal component regression and all operations of linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. The principal component regression analysis can be used to overcome disturbance of the multicollinearity. The simplified, speeded up and accurate statistical effect is reached through the principal component regression analysis with SPSS.
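
    The same analysis can be sketched outside SPSS; the following illustrative Python version (synthetic data, assumed component count) regresses on the leading principal components of nearly collinear predictors, which is the core of principal component regression.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        n = 200
        x1 = rng.standard_normal(n)
        x2 = x1 + 0.05 * rng.standard_normal(n)        # nearly collinear with x1
        x3 = rng.standard_normal(n)
        X = np.column_stack([x1, x2, x3])
        y = 2 * x1 + 0.5 * x3 + 0.1 * rng.standard_normal(n)

        # standardize, keep two components, then fit ordinary least squares on them
        pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
        pcr.fit(X, y)
        print("R^2 of principal component regression:", round(pcr.score(X, y), 3))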

  11. GaussianCpG: a Gaussian model for detection of CpG island in human genome sequences.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Zelikovsky, Alexander; Pan, Yi

    2017-05-24

    As crucial markers in identifying biological elements and processes in mammalian genomes, CpG islands (CGI) play important roles in DNA methylation, gene regulation, epigenetic inheritance, gene mutation, chromosome inactivation and nucleosome retention. The generally accepted criteria of CGI rely on: (a) %G+C content is ≥ 50%, (b) the ratio of the observed CpG content and the expected CpG content is ≥ 0.6, and (c) the general length of CGI is greater than 200 nucleotides. Most existing computational methods for the prediction of CpG islands are programmed on these rules. However, many experimentally verified CpG islands deviate from these artificial criteria; experiments indicate that in many cases the %G+C content of verified islands in the human genome does not meet them. We analyze the energy distribution over genomic primary structure for each CpG site and adopt the parameters from statistics of the human genome. The evaluation results show that the new model can predict CpG islands efficiently by balancing both sensitivity and specificity over known human CGI data sets. Compared with other models, GaussianCpG can achieve better performance in CGI detection. Our Gaussian model aims to simplify the complex interaction between nucleotides. The model is computed not by the linear statistical method but by the Gaussian energy distribution and accumulation. The parameters of the Gaussian function are not arbitrarily designated but deliberately chosen by optimizing the biological statistics. By using the pseudopotential analysis on CpG islands, the novel model is validated on both the real and artificial data sets.

  12. Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...

  13. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and an atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model based upon measurements at specific monitoring locations and linear regression, although this model is intrinsically non-linear. With these estimated parameters and the Gaussian model the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the estimates for the deposition, deposition data have been generated by the Gaussian model, including a measurement error by a Monte Carlo simulation, and this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The present technique has also been applied for the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and a dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor 1.5 worse. The present type of analysis makes a cost-benefit approach to the

  14. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
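
    For reference, a minimal maximum-likelihood beta regression (the classical approach mentioned above, not the boosting algorithm) on synthetic data, with a logit link for the mean and a constant precision parameter phi; all values are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import beta as beta_dist

        rng = np.random.default_rng(4)
        n = 500
        x = rng.standard_normal(n)
        mu_true = expit(-0.5 + 1.2 * x)
        phi_true = 20.0
        y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

        def negloglik(params):
            b0, b1, log_phi = params
            mu = expit(b0 + b1 * x)
            phi = np.exp(log_phi)
            return -np.sum(beta_dist.logpdf(y, mu * phi, (1 - mu) * phi))

        fit = minimize(negloglik, x0=[0.0, 0.0, np.log(10.0)])
        print("estimates (b0, b1, phi):", fit.x[0], fit.x[1], np.exp(fit.x[2]))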

  15. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Alrashdi, Ayed; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2017-01-01

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  16. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-09-27

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  17. Understanding logistic regression analysis.

    Science.gov (United States)

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
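
    A short illustration of the procedure on synthetic data (variable names are hypothetical): fit the model with both explanatory variables and exponentiate the coefficients to obtain adjusted odds ratios.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 1000
        exposure = rng.binomial(1, 0.4, n)
        age = rng.normal(50, 10, n)
        logit = -3 + 0.9 * exposure + 0.04 * age
        outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([exposure, age]))
        fit = sm.Logit(outcome, X).fit(disp=False)
        odds_ratios = np.exp(fit.params[1:])          # skip the intercept
        print("adjusted odds ratios (exposure, age):", odds_ratios.round(2))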

  18. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." --International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  19. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  20. How Gaussian can our Universe be?

    Science.gov (United States)

    Cabass, G.; Pajer, E.; Schmidt, F.

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ²/k_s², similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  1. Gaussian vector fields on triangulated surfaces

    DEFF Research Database (Denmark)

    Ipsen, John H

    2016-01-01

    ...proven to be very useful to resolve the complex interplay between in-plane ordering of membranes and membrane conformations. In the present work we have developed a procedure for realistic representations of Gaussian models with in-plane vector degrees of freedom on a triangulated surface. The method...

  2. The Wehrl entropy has Gaussian optimizers

    DEFF Research Database (Denmark)

    De Palma, Giacomo

    2018-01-01

    We determine the minimum Wehrl entropy among the quantum states with a given von Neumann entropy and prove that it is achieved by thermal Gaussian states. This result determines the relation between the von Neumann and the Wehrl entropies. The key idea is proving that the quantum-classical channel...

  3. How Gaussian can our Universe be?

    Energy Technology Data Exchange (ETDEWEB)

    Cabass, G. [Physics Department and INFN, Università di Roma 'La Sapienza', P.le Aldo Moro 2, 00185, Rome (Italy); Pajer, E. [Institute for Theoretical Physics and Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Schmidt, F., E-mail: giovanni.cabass@roma1.infn.it, E-mail: e.pajer@uu.nl, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-01-01

    Gravity is a non-linear theory, and hence, barring cancellations, the initial super-horizon perturbations produced by inflation must contain some minimum amount of mode coupling, or primordial non-Gaussianity. In single-field slow-roll models, where this lower bound is saturated, non-Gaussianity is controlled by two observables: the tensor-to-scalar ratio, which is uncertain by more than fifty orders of magnitude; and the scalar spectral index, or tilt, which is relatively well measured. It is well known that to leading and next-to-leading order in derivatives, the contributions proportional to the tilt disappear from any local observable, and suspicion has been raised that this might happen to all orders, allowing for an arbitrarily low amount of primordial non-Gaussianity. Employing Conformal Fermi Coordinates, we show explicitly that this is not the case. Instead, a contribution of order the tilt appears in local observables. In summary, the floor of physical primordial non-Gaussianity in our Universe has a squeezed-limit scaling of k_ℓ²/k_s², similar to equilateral and orthogonal shapes, and a dimensionless amplitude of order 0.1 × (n_s − 1).

  4. Gaussian shaping filter for nuclear spectrometry

    International Nuclear Information System (INIS)

    Menezes, A.S.C. de.

    1980-01-01

    A theoretical study of a Gaussian shaping filter, using the Padé approximation, for use in gamma spectroscopy is presented. This approximation has proved superior to the classical cascade of RC integrators approximation in terms of signal-to-noise ratio and pulse symmetry. An experimental filter was designed, simulated on a computer, constructed, and tested in the laboratory. (author) [pt

  5. Asymptotic expansions for the Gaussian unitary ensemble

    DEFF Research Database (Denmark)

    Haagerup, Uffe; Thorbjørnsen, Steen

    2012-01-01

    Let g : ℝ → ℂ be a C^∞-function with all derivatives bounded and let tr_n denote the normalized trace on the n × n matrices. In Ref. 3 Ercolani and McLaughlin established asymptotic expansions of the mean value E{tr_n(g(X_n))} for a rather general class of random matrices X_n, including the Gaussian U...

  6. Chimera states in Gaussian coupled map lattices

    Science.gov (United States)

    Li, Xiao-Wen; Bi, Ran; Sun, Yue-Xiang; Zhang, Shuo; Song, Qian-Qian

    2018-04-01

    We study chimera states in one-dimensional and two-dimensional Gaussian coupled map lattices through simulations and experiments. Similar to the case of global coupling oscillators, individual lattices can be regarded as being controlled by a common mean field. A space-dependent order parameter is derived from a self-consistency condition in order to represent the collective state.

  7. Gaussian curvature on hyperelliptic Riemann surfaces

    Indian Academy of Sciences (India)

    Indian Acad. Sci. (Math. Sci.), Vol. 124, No. 2, May 2014, pp. 155–167. © Indian Academy of Sciences. Gaussian curvature on hyperelliptic Riemann surfaces. Abel Castorena, Centro de Ciencias Matemáticas (Universidad Nacional Autónoma de México, Campus Morelia), Apdo. Postal 61-3 Xangari, C.P. 58089 Morelia.

  8. Additivity properties of a Gaussian channel

    International Nuclear Information System (INIS)

    Giovannetti, Vittorio; Lloyd, Seth

    2004-01-01

    The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channel where the input signal is randomly displaced or where it is coupled linearly to an external environment

  9. Modeling text with generalizable Gaussian mixtures

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Sigurdsson, Sigurdur; Kolenda, Thomas

    2000-01-01

    We apply and discuss generalizable Gaussian mixture (GGM) models for text mining. The model automatically adapts model complexity for a given text representation. We show that the generalizability of these models depends on the dimensionality of the representation and the sample size. We discuss...

  10. Improving the gaussian effective potential: quantum mechanics

    International Nuclear Information System (INIS)

    Eboli, O.J.P.; Thomaz, M.T.; Lemos, N.A.

    1990-08-01

    In order to gain intuition for variational problems in field theory, we analyze variationally the quantum-mechanical anharmonic oscillator [V(x) = (k/2) x² + (λ/4) x⁴]. Special attention is paid to improvements to the Gaussian effective potential. (author)
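
    A back-of-the-envelope numerical version of the Gaussian variational idea for this oscillator, assuming ℏ = m = 1 and the textbook trial-state energy E(Ω) = Ω/4 + k/(4Ω) + 3λ/(16Ω²) for a Gaussian ground state of frequency Ω; this is only a sketch of the baseline estimate, not the paper's improved potential, and the parameter values are arbitrary.

        from scipy.optimize import minimize_scalar

        def variational_energy(omega, k=1.0, lam=0.4):
            # expectation of H in a Gaussian trial state of frequency omega
            return omega / 4 + k / (4 * omega) + 3 * lam / (16 * omega**2)

        res = minimize_scalar(variational_energy, bounds=(1e-3, 50.0), method="bounded")
        print(f"optimal Omega = {res.x:.4f}, Gaussian variational energy = {res.fun:.4f}")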

  11. Comparison of halo detection from noisy weak lensing convergence maps with Gaussian smoothing and MRLens treatment

    International Nuclear Information System (INIS)

    Jiao Yangxiu; Shan Huanyuan; Fan Zuhui

    2011-01-01

    Taking into account the noise from intrinsic ellipticities of source galaxies, we study the efficiency and completeness of halo detections from weak lensing convergence maps. Particularly, with numerical simulations, we compare the Gaussian filter with the so-called MRLens treatment based on the modification of the Maximum Entropy Method. For a pure noise field without lensing signals, a Gaussian smoothing results in a residual noise field that is approximately Gaussian in terms of statistics if a large enough number of galaxies are included in the smoothing window. On the other hand, the noise field after the MRLens treatment is significantly non-Gaussian, resulting in complications in characterizing the noise effects. Considering weak-lensing cluster detections, although the MRLens treatment effectively deletes false peaks arising from noise, it removes the real peaks heavily due to its inability to distinguish real signals with relatively low amplitudes from noise in its restoration process. The higher the noise level is, the larger the removal effects are for the real peaks. For a survey with a source density n_g ∼ 30 arcmin⁻², the number of peaks found in an area of 3 × 3 deg² after MRLens filtering is only ∼ 50 for the detection threshold κ = 0.02, while the number of halos with M > 5 × 10¹³ M_⊙ and with redshift z ≤ 2 in the same area is expected to be ∼ 530. For the Gaussian smoothing treatment, the number of detections is ∼ 260, much larger than that of the MRLens. The Gaussianity of the noise statistics in the Gaussian smoothing case adds further advantages for this method to circumvent the problem of the relatively low efficiency in weak-lensing cluster detections. Therefore, in studies aiming to construct large cluster samples from weak-lensing surveys, the Gaussian smoothing method performs significantly better than the MRLens treatment.
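
    A toy sketch of the Gaussian-smoothing step only (MRLens is a separate package and is not reproduced here): add shape-like noise to a synthetic convergence map, smooth it, and count peaks above a threshold. All amplitudes and sizes are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        rng = np.random.default_rng(6)
        kappa = np.zeros((256, 256))
        for _ in range(20):                        # a few toy "halo" peaks
            i, j = rng.integers(20, 236, size=2)
            kappa[i, j] = rng.uniform(0.03, 0.08)
        kappa = gaussian_filter(kappa, 3.0)        # give the halos some extent
        noisy = kappa + 0.02 * rng.standard_normal(kappa.shape)   # shape noise

        smoothed = gaussian_filter(noisy, sigma=2.0)
        is_peak = (smoothed == maximum_filter(smoothed, size=9)) & (smoothed > 0.02)
        print("peaks above threshold:", int(is_peak.sum()))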

  12. Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments

    International Nuclear Information System (INIS)

    Luo, X.

    1994-01-01

    Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincaré characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scale to which COBE is sensitive, the statistics are probably Gaussian. On the small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test the Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.

  13. Understanding Poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
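
    A brief illustration of the two modeling options mentioned above, on synthetic overdispersed counts rather than the ENSPIRE data; the dispersion parameter and covariate are assumptions for the sketch.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 500
        x = rng.standard_normal(n)
        mu = np.exp(0.5 + 0.8 * x)
        counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts with mean mu

        X = sm.add_constant(x)
        poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print("Poisson deviance:", round(poisson_fit.deviance, 1))
        print("Negative binomial deviance:", round(nb_fit.deviance, 1))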

  14. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L²-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  15. Estimators for local non-Gaussianities

    International Nuclear Information System (INIS)

    Creminelli, P.; Senatore, L.; Zaldarriaga, M.

    2006-05-01

    We study the Likelihood function of data given f_NL for the so-called local type of non-Gaussianity. In this case the curvature perturbation is a non-linear function, local in real space, of a Gaussian random field. We compute the Cramer-Rao bound for f_NL and show that for small values of f_NL the 3-point function estimator saturates the bound and is equivalent to calculating the full Likelihood of the data. However, for sufficiently large f_NL, the naive 3-point function estimator has a much larger variance than previously thought. In the limit in which the departure from Gaussianity is detected with high confidence, error bars on f_NL only decrease as 1/ln N_pix rather than N_pix^(−1/2) as the size of the data set increases. We identify the physical origin of this behavior and explain why it only affects the local type of non-Gaussianity, where the contribution of the first multipoles is always relevant. We find a simple improvement to the 3-point function estimator that makes the square root of its variance decrease as N_pix^(−1/2) even for large f_NL, asymptotically approaching the Cramer-Rao bound. We show that using the modified estimator is practically equivalent to computing the full Likelihood of f_NL given the data. Thus other statistics of the data, such as the 4-point function and Minkowski functionals, contain no additional information on f_NL. In particular, we explicitly show that the recent claims about the relevance of the 4-point function are not correct. By direct inspection of the Likelihood, we show that the data do not contain enough information for any statistic to be able to constrain higher order terms in the relation between the Gaussian field and the curvature perturbation, unless these are orders of magnitude larger than the size suggested by the current limits on f_NL. (author)
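
    A toy realization of the local model underlying this discussion: the curvature perturbation as a local quadratic function of a Gaussian field, ζ = ζ_G + (3/5) f_NL (ζ_G² − ⟨ζ_G²⟩). The field amplitude is exaggerated for visibility, and only the sample skewness is inspected, not the 3-point estimator analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(8)
        f_nl = 40.0
        g = 0.01 * rng.standard_normal((512, 512))   # Gaussian field; amplitude exaggerated
        zeta = g + (3.0 / 5.0) * f_nl * (g**2 - np.mean(g**2))

        # skewness is the simplest fingerprint of the quadratic term
        skew = np.mean((zeta - zeta.mean())**3) / np.std(zeta)**3
        print(f"sample skewness of the local non-Gaussian field: {skew:.3f}")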

  16. Cosmological information in Gaussianized weak lensing signals

    Science.gov (United States)

    Joachimi, B.; Taylor, A. N.; Kiessling, A.

    2011-11-01

    Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ = 1500, and a decrease in the area of the confidence region in the Ω_m–σ_8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non
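
    A sketch of the Gaussianizing transformation itself (not the power-spectrum modeling): an offset Box-Cox transform of a skewed toy field, with the exponent chosen by maximum likelihood via scipy. The lognormal stand-in for the convergence is an assumption for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        kappa = np.exp(0.5 * rng.standard_normal(100_000)) - 1.0   # skewed, roughly lognormal

        offset = 1.0 - kappa.min()                  # Box-Cox needs strictly positive input
        transformed, lam = stats.boxcox(kappa + offset)
        print(f"optimal Box-Cox exponent: {lam:.3f}")
        print("skewness before/after:",
              round(stats.skew(kappa), 2), round(stats.skew(transformed), 2))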

  17. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  18. Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables

    Science.gov (United States)

    Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.

    2009-12-01

    Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
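
    A small numerical check of this equivalence for a Gaussian bivariate VAR(1), with assumed coefficients: both quantities reduce to the same ratio of conditional (prediction-error) variances, the transfer entropy being half the Granger causality F.

        import numpy as np

        rng = np.random.default_rng(10)
        T = 20000
        x, y = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            y[t] = 0.7 * y[t - 1] + rng.standard_normal()
            x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

        def cond_var(cov, i, rest):
            # conditional variance of variable i given the variables in `rest`
            a = cov[np.ix_([i], rest)]
            return (cov[i, i] - a @ np.linalg.solve(cov[np.ix_(rest, rest)], a.T)).item()

        cov = np.cov(np.column_stack([x[1:], x[:-1], y[:-1]]), rowvar=False)
        var_restricted = cond_var(cov, 0, [1])      # predict x_t from x_{t-1} only
        var_full = cond_var(cov, 0, [1, 2])         # predict x_t from x_{t-1} and y_{t-1}

        granger = np.log(var_restricted / var_full)                   # Granger causality F(Y->X)
        transfer_entropy = 0.5 * np.log(var_restricted / var_full)    # difference of Gaussian entropies
        print(f"F = {granger:.3f}, 2*TE = {2 * transfer_entropy:.3f}")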

  19. Compact Gaussian quantum computation by multi-pixel homodyne detection

    International Nuclear Information System (INIS)

    Ferrini, G; Fabre, C; Treps, N; Gazeau, J P; Coudreau, T

    2013-01-01

    We study the possibility of producing and detecting continuous variable cluster states in an extremely compact optical setup. This method is based on a multi-pixel homodyne detection system recently demonstrated experimentally, which includes classical data post-processing. It allows the incorporation of the linear optics network, usually employed in standard experiments for the production of cluster states, in the stage of the measurement. After giving an example of cluster state generation by this method, we further study how this procedure can be generalized to perform Gaussian quantum computation. (paper)

  20. Fluctuation relations with intermittent non-Gaussian variables.

    Science.gov (United States)

    Budini, Adrián A

    2011-12-01

    Nonequilibrium stationary fluctuations may exhibit a special symmetry called fluctuation relations (FRs). Here, we show that this property is always satisfied by the subtraction of two random and independent variables related by a thermodynamiclike change of measure. Taking one of them as a modulated Poisson process, it is demonstrated that intermittence and FRs are compatible properties that may coexist naturally. Strong non-Gaussian features characterize the probability distribution and its generating function. Their associated large deviation functions develop a "kink" at the origin and a plateau regime respectively. Application of this model in different stationary nonequilibrium situations is discussed.

  1. Generalized Fokker-Planck equations for coloured, multiplicative Gaussian noise

    International Nuclear Information System (INIS)

    Cetto, A.M.; Pena, L. de la; Velasco, R.M.

    1984-01-01

    With the help of Novikov's theorem, it is possible to derive a master equation for a coloured, multiplicative, Gaussian random process; the coefficients of this master equation satisfy a complicated auxiliary integro-differential equation. For small values of the Kubo number, the master equation reduces to an approximate generalized Fokker-Planck equation. The diffusion coefficient is explicitly written in terms of correlation functions. Finally, a straightforward and elementary second order perturbative treatment is proposed to derive the same approximate Fokker-Planck equation. (author)

  2. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
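
    A minimal sketch of the complex-number representation described above, on hypothetical synthetic data: a vector-valued (complex) response is regressed on complex explanatory variables, solved either directly by complex least squares or through the equivalent real-valued system obtained from the isomorphism a + bi -> [[a, -b], [b, a]].

```python
# Hypothetical sketch of vector regression via complex least squares.
# 2-D vectors are encoded as complex numbers; the coefficients come out as vectors too.
import numpy as np

rng = np.random.default_rng(2)
n = 200
z1 = rng.normal(size=n) + 1j * rng.normal(size=n)   # independent vector variable 1
z2 = rng.normal(size=n) + 1j * rng.normal(size=n)   # independent vector variable 2
b_true = np.array([1.5 - 0.5j, -0.3 + 1.0j])        # true vector coefficients
Z = np.column_stack([z1, z2])
w = Z @ b_true + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Direct complex least squares
b_complex, *_ = np.linalg.lstsq(Z, w, rcond=None)

# Equivalent real-valued system obtained through the isomorphism
A = np.block([[Z.real, -Z.imag],
              [Z.imag,  Z.real]])
rhs = np.concatenate([w.real, w.imag])
b_real, *_ = np.linalg.lstsq(A, rhs, rcond=None)
b_from_real = b_real[:2] + 1j * b_real[2:]

print(b_complex, b_from_real)   # both recover b_true up to noise
```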

  3. Wind Power Ramp Events Prediction with Hybrid Machine Learning Regression Techniques and Reanalysis Data

    Directory of Open Access Journals (Sweden)

    Laura Cornejo-Bueno

    2017-11-01

    Full Text Available Wind Power Ramp Events (WPREs) are large fluctuations of wind power in a short time interval, which lead to strong, undesirable variations in the electric power produced by a wind farm. Their accurate prediction is important in the effort of efficiently integrating wind energy in the electric system, without considerably affecting its stability, robustness and resilience. In this paper, we tackle the problem of predicting WPREs by applying Machine Learning (ML) regression techniques. Our approach consists of using variables from atmospheric reanalysis data as predictive inputs for the learning machine, which opens the possibility of hybridizing numerical-physical weather models with ML techniques for WPRE prediction in real systems. Specifically, we have explored the feasibility of a number of state-of-the-art ML regression techniques, such as support vector regression, artificial neural networks (multi-layer perceptrons and extreme learning machines) and Gaussian processes, to solve the problem. Furthermore, the ERA-Interim reanalysis from the European Centre for Medium-Range Weather Forecasts is used in this paper because of its accuracy and high resolution (in both the spatial and temporal domains). Aiming at validating the feasibility of our prediction approach, we have carried out an extensive experimental study using real data from three wind farms in Spain, discussing the performance of the different ML regression techniques tested on this wind power ramp event prediction problem.
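
    A rough sketch of the kind of regression pipeline described, assuming synthetic stand-ins for the reanalysis predictors and the ramp index, and using two of the techniques named above as implemented in scikit-learn; this is not the authors' experimental setup.

```python
# Hypothetical sketch: regressing a wind-power-ramp magnitude on reanalysis-style
# predictors with two of the ML techniques named above (SVR and a Gaussian process).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 3))                     # e.g. wind speed, pressure gradient, shear
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=n)   # synthetic ramp index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svr = SVR(C=10.0, epsilon=0.05).fit(X_tr, y_tr)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X_tr, y_tr)

print("SVR R^2:", svr.score(X_te, y_te))
print("GPR R^2:", gpr.score(X_te, y_te))
```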

  4. Phase space structure of generalized Gaussian cat states

    International Nuclear Information System (INIS)

    Nicacio, Fernando; Maia, Raphael N.P.; Toscano, Fabricio; Vallejos, Raul O.

    2010-01-01

    We analyze generalized Gaussian cat states obtained by superposing arbitrary Gaussian states. The structure of the interference term of the Wigner function is always hyperbolic, surviving the action of a thermal reservoir. We also consider certain superpositions of mixed Gaussian states. An application to semiclassical dynamics is discussed.

  5. Linking network usage patterns to traffic Gaussianity fit

    NARCIS (Netherlands)

    de Oliveira Schmidt, R.; Sadre, R.; Melnikov, Nikolay; Schönwälder, Jürgen; Pras, Aiko

    Gaussian traffic models are widely used in the domain of network traffic modeling. The central assumption is that traffic aggregates are Gaussian distributed. Due to its importance, the Gaussian character of network traffic has been extensively assessed by researchers in the past years. In 2001,

  6. Multicollinearity and Regression Analysis

    Science.gov (United States)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis a correlation between the response and the predictor(s) is expected, but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, among which are historical data, experience, etc. In the end, the selection of the most important predictors is a judgement left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its reasons and its consequences for the reliability of the regression model.
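
    A small illustration of the effect described, on hypothetical data: two nearly collinear predictors inflate the coefficient standard errors, which can be diagnosed with variance inflation factors (statsmodels is assumed to be available).

```python
# Hypothetical illustration of multicollinearity inflating standard errors,
# diagnosed with variance inflation factors (VIF).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)        # nearly collinear with x1
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print(fit.bse)                             # inflated standard errors on x1 and x2
print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])  # VIF >> 10
```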

  7. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and the regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
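
    The algorithm reduces to solving a linear system; a simplified sketch of that idea is given below with hypothetical data, under the assumption that the sum-space kernel can be taken as the sum of a large-width and a small-width Gaussian kernel. This is plain kernel ridge regression with a sum kernel, not the exact estimator analyzed in the paper.

```python
# Hypothetical sketch of regularized regression with a sum of two Gaussian kernels
# (a large scale for the smooth trend, a small scale for the fast wiggles).
import numpy as np

def gaussian_kernel(a, b, width):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, size=120))
y = np.sin(0.5 * x) + 0.3 * np.sin(8.0 * x) + 0.05 * rng.normal(size=x.size)   # nonflat target

lam = 1e-3
K = gaussian_kernel(x, x, width=2.0) + gaussian_kernel(x, x, width=0.15)   # sum-space kernel
alpha = np.linalg.solve(K + lam * x.size * np.eye(x.size), y)              # linear system

x_new = np.linspace(0, 10, 200)
K_new = gaussian_kernel(x_new, x, width=2.0) + gaussian_kernel(x_new, x, width=0.15)
y_hat = K_new @ alpha   # fitted low- plus high-frequency components
```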

  8. Minimax Regression Quantiles

    DEFF Research Database (Denmark)

    Bache, Stefan Holst

    A new and alternative quantile regression estimator is developed and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

  9. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product [...] functionals. The software presented here is implemented in the riskRegression package.

  10. On Alternate Relaying with Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed

    2016-06-06

    In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.

  11. On Alternate Relaying with Improper Gaussian Signaling

    KAUST Repository

    Gaafar, Mohamed; Amin, Osama; Ikhlef, Aissa; Chaaban, Anas; Alouini, Mohamed-Slim

    2016-01-01

    In this letter, we investigate the potential benefits of adopting improper Gaussian signaling (IGS) in a two-hop alternate relaying (AR) system. Given the known benefits of using IGS in interference-limited networks, we propose to use IGS to relieve the inter-relay interference (IRI) impact on the AR system assuming no channel state information is available at the source. In this regard, we assume that the two relays use IGS and the source uses proper Gaussian signaling (PGS). Then, we optimize the degree of impropriety of the relays signal, measured by the circularity coefficient, to maximize the total achievable rate. Simulation results show that using IGS yields a significant performance improvement over PGS, especially when the first hop is a bottleneck due to weak source-relay channel gains and/or strong IRI.

  12. Fractional Diffusion in Gaussian Noisy Environment

    Directory of Open Access Journals (Sweden)

    Guannan Hu

    2015-03-01

    Full Text Available We study the fractional diffusion in a Gaussian noisy environment as described by the fractional order stochastic heat equations of the following form: \(D_t^{(\alpha)} u(t,x) = Bu + u \cdot \dot{W}^H\), where \(D_t^{(\alpha)}\) is the Caputo fractional derivative of order \(\alpha \in (0,1)\) with respect to the time variable \(t\), \(B\) is a second-order elliptic operator with respect to the space variable \(x \in \mathbb{R}^d\), and \(\dot{W}^H\) is a time-homogeneous fractional Gaussian noise with Hurst parameter \(H = (H_1, \dots, H_d)\). We obtain conditions on \(\alpha\) and \(H\) under which the square integrable solution \(u\) exists uniquely.

  13. Interweave Cognitive Radio with Improper Gaussian Signaling

    KAUST Repository

    Hedhly, Wafa

    2018-01-15

    Improper Gaussian signaling (IGS) has proven its ability to improve the performance of the underlay and overlay cognitive radio paradigms. In this paper, the interweave cognitive radio paradigm is studied when the cognitive user employs IGS. The instantaneous achievable rate performance of both the primary and secondary users is analyzed for specific secondary user sensing and detection capabilities. Next, the IGS scheme is optimized to maximize the achievable rate of the secondary user while satisfying a target minimum rate requirement for the primary user. A proper Gaussian signaling (PGS) scheme design is also derived to serve as a benchmark for the IGS scheme design. Finally, different numerical results are introduced to show the gain reaped from adopting IGS over PGS under different system parameters. The main advantage of employing IGS is observed at low sensing and detection capabilities of the SU, a weaker PU direct link, and stronger SU interference on the PU side.

  14. A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yong, E-mail: hy@njust.edu.cn, E-mail: taogang@njust.edu.cn; Tao, Gang, E-mail: hy@njust.edu.cn, E-mail: taogang@njust.edu.cn [School of Energy and Power Engineering, Nanjing University of Science and Technology, 200 XiaoLingwei Street, Nanjing 210094 (China)

    2014-09-01

    The stability of a binary airfoil with feedback control under stochastic disturbances, modeled as a non-Gaussian colored noise, is studied in this paper. First, based on approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Then, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which can determine the almost-sure stability of the system, and the effective region of the control parameters are calculated.

  15. A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation.

    Science.gov (United States)

    Huang, Yong; Tao, Gang

    2014-09-01

    The stability of a binary airfoil with feedback control under stochastic disturbances, modeled as a non-Gaussian colored noise, is studied in this paper. First, based on approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Then, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process is obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which can determine the almost-sure stability of the system, and the effective region of the control parameters are calculated.

  16. Non-Markovianity of Gaussian Channels.

    Science.gov (United States)

    Torre, G; Roga, W; Illuminati, F

    2015-08-14

    We introduce a necessary and sufficient criterion for the non-Markovianity of Gaussian quantum dynamical maps based on the violation of divisibility. The criterion is derived by defining a general vectorial representation of the covariance matrix which is then exploited to determine the condition for the complete positivity of partial maps associated with arbitrary time intervals. Such construction does not rely on the Choi-Jamiolkowski representation and does not require optimization over states.

  17. Recognition of Images Degraded by Gaussian Blur

    Czech Academy of Sciences Publication Activity Database

    Flusser, Jan; Farokhi, Sajad; Höschl, Cyril; Suk, Tomáš; Zitová, Barbara; Pedone, M.

    2016-01-01

    Vol. 25, No. 2 (2016), pp. 790-806 ISSN 1057-7149 R&D Projects: GA ČR(CZ) GA15-16928S Institutional support: RVO:67985556 Keywords: blurred image * object recognition * blur invariant comparison * Gaussian blur * projection operators * image moments * moment invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0454335.pdf

  18. Multiple linear regression analysis

    Science.gov (United States)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
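
    The FORTRAN IV program itself is not reproduced here; the sketch below is a hypothetical modern analogue of a forward stepwise procedure that keeps only statistically significant coefficients, written with statsmodels.

```python
# Hypothetical modern analogue of the stepwise procedure: forward selection of the
# most statistically significant predictors with statsmodels OLS.
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, alpha=0.05):
    """Add, one at a time, the predictor with the smallest p-value below alpha."""
    remaining = list(range(X.shape[1]))
    selected = []
    while remaining:
        pvals = {}
        for j in remaining:
            cols = selected + [j]
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals[j] = fit.pvalues[-1]          # p-value of the newly added column
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 6))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + rng.normal(size=150)
print(forward_stepwise(X, y))                   # typically [1, 4]
```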

  19. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  20. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject.* Expanded coverage of diagnostics and methods of model fitting.* Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models.* More than 200 problems throughout the book plus outline solutions for the exercises.* This revision has been extensively class-tested.

  1. Nonlinear Regression with R

    CERN Document Server

    Ritz, Christian; Parmigiani, Giovanni

    2009-01-01

    R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.

  2. and Multinomial Logistic Regression

    African Journals Online (AJOL)

    This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

  3. Mechanisms of neuroblastoma regression

    Science.gov (United States)

    Brodeur, Garrett M.; Bagatell, Rochelle

    2014-01-01

    Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

  4. Resonant non-Gaussianity with equilateral properties

    International Nuclear Information System (INIS)

    Gwyn, Rhiannon; Rummel, Markus

    2012-11-01

    We discuss the effect of superimposing multiple sources of resonant non-Gaussianity, which arise for instance in models of axion inflation. The resulting sum of oscillating shape contributions can be used to ''Fourier synthesize'' different non-oscillating shapes in the bispectrum. As an example we reproduce an approximately equilateral shape from the superposition of O(10) oscillatory contributions with resonant shape. This implies a possible degeneracy between the equilateral-type non-Gaussianity typical of models with non-canonical kinetic terms, such as DBI inflation, and an equilateral-type shape arising from a superposition of resonant-type contributions in theories with canonical kinetic terms. The absence of oscillations in the 2-point function, together with the structure of the resonant N-point functions, implies that a detection of equilateral non-Gaussianity at a level greater than the PLANCK sensitivity of f_NL ∝ O(5) will rule out a resonant origin. We comment on the questions arising from possible embeddings of this idea in a string theory setting.

  5. Unitarily localizable entanglement of Gaussian states

    International Nuclear Information System (INIS)

    Serafini, Alessio; Adesso, Gerardo; Illuminati, Fabrizio

    2005-01-01

    We consider generic (m×n)-mode bipartitions of continuous-variable systems, and study the associated bisymmetric multimode Gaussian states. They are defined as (m+n)-mode Gaussian states invariant under local mode permutations on the m-mode and n-mode subsystems. We prove that such states are equivalent, under local unitary transformations, to the tensor product of a two-mode state and of m+n-2 uncorrelated single-mode states. The entanglement between the m-mode and the n-mode blocks can then be completely concentrated on a single pair of modes by means of local unitary operations alone. This result allows us to prove that the PPT (positivity of the partial transpose) condition is necessary and sufficient for the separability of (m+n)-mode bisymmetric Gaussian states. We determine exactly their negativity and identify a subset of bisymmetric states whose multimode entanglement of formation can be computed analytically. We consider explicit examples of pure and mixed bisymmetric states and study their entanglement scaling with the number of modes

  6. Gaussian Hypothesis Testing and Quantum Illumination.

    Science.gov (United States)

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  7. Resonant non-Gaussianity with equilateral properties

    Energy Technology Data Exchange (ETDEWEB)

    Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-11-15

    We discuss the effect of superimposing multiple sources of resonant non-Gaussianity, which arise for instance in models of axion inflation. The resulting sum of oscillating shape contributions can be used to ''Fourier synthesize'' different non-oscillating shapes in the bispectrum. As an example we reproduce an approximately equilateral shape from the superposition of O(10) oscillatory contributions with resonant shape. This implies a possible degeneracy between the equilateral-type non-Gaussianity typical of models with non-canonical kinetic terms, such as DBI inflation, and an equilateral-type shape arising from a superposition of resonant-type contributions in theories with canonical kinetic terms. The absence of oscillations in the 2-point function, together with the structure of the resonant N-point functions, implies that a detection of equilateral non-Gaussianity at a level greater than the PLANCK sensitivity of f_NL ∝ O(5) will rule out a resonant origin. We comment on the questions arising from possible embeddings of this idea in a string theory setting.

  8. Non-Gaussian conductivity fluctuations in semiconductors

    International Nuclear Information System (INIS)

    Melkonyan, S.V.

    2010-01-01

    A theoretical study is presented on the statistical properties of conductivity fluctuations caused by concentration and mobility fluctuations of the current carriers. It is established that mobility fluctuations result from random deviations in the thermal equilibrium distribution of the carriers. It is shown that mobility fluctuations have generation-recombination and shot components which do not satisfy the requirements of the central limit theorem, in contrast to the concentration fluctuations of the current carriers and the intraband component of the mobility fluctuation. It is shown that in general the mobility fluctuation consists of a thermal (or intraband) Gaussian component and non-thermal (or generation-recombination, shot, etc.) non-Gaussian components. The analysis of theoretical results and experimental data from the literature shows that the statistical properties of mobility fluctuations and of 1/f noise fully coincide. The deviation of the mobility or 1/f fluctuations from Gaussian statistics goes hand in hand with the magnitude of the non-thermal noise (generation-recombination, shot, burst, pulse noises, etc.).

  9. Perturbative Gaussianizing transforms for cosmological fields

    Science.gov (United States)

    Hall, Alex; Mead, Alexander

    2018-01-01

    Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.

  10. Image illumination enhancement with an objective no-reference measure of illumination assessment based on Gaussian distribution mapping

    Directory of Open Access Journals (Sweden)

    Gholamreza Anbarjafari

    2015-12-01

    Full Text Available Illumination problems have been an important concern in many image processing applications. The pattern of the histogram of an image carries meaningful features; hence, during illumination enhancement, it is important not to destroy such information. In this paper we propose a method to enhance image illumination using Gaussian distribution mapping, which also preserves the information embedded in the histogram of the original image. First, a Gaussian distribution based on the mean and standard deviation of the input image is calculated. Simultaneously, a Gaussian distribution with the desired mean and standard deviation is calculated. Then the cumulative distribution function of each of the Gaussian distributions is calculated and used to map the old pixel values onto the new pixel values. Another important issue in the field of illumination enhancement is the absence of a quantitative measure for the assessment of the illumination of an image. In this work, a quantitative measure indicating the illumination state, i.e. the contrast level and brightness of an image, is also proposed. The measure utilizes the estimated Gaussian distribution of the input image and the Kullback-Leibler divergence (KLD) between the estimated Gaussian and the desired Gaussian distributions to calculate the quantitative measure. The experimental results show the effectiveness and reliability of the proposed illumination enhancement technique, as well as of the proposed illumination assessment measure, over conventional and state-of-the-art techniques.
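
    A minimal sketch of the mapping described: pixel values pass through the CDF of the Gaussian estimated from the input image and then through the inverse CDF of the desired Gaussian, and the closed-form KL divergence between the two Gaussians serves as the illumination measure. The image size, desired mean and standard deviation, and the synthetic "dark" image below are illustrative assumptions.

```python
# Hypothetical sketch of Gaussian distribution mapping for illumination enhancement,
# plus the KL divergence between the estimated and desired Gaussians as a quality measure.
import numpy as np
from scipy.stats import norm

def gaussian_map(image, desired_mean=127.5, desired_std=50.0):
    mu, sigma = image.mean(), image.std()
    u = norm.cdf(image, loc=mu, scale=sigma)                 # CDF of the estimated input Gaussian
    out = norm.ppf(u, loc=desired_mean, scale=desired_std)   # inverse CDF of the desired Gaussian
    return np.clip(out, 0, 255)

def gaussian_kld(mu1, s1, mu2, s2):
    """Closed-form KL divergence KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

rng = np.random.default_rng(7)
dark = np.clip(rng.normal(60, 15, size=(64, 64)), 0, 255)    # synthetic under-exposed image
enhanced = gaussian_map(dark)

print(gaussian_kld(dark.mean(), dark.std(), 127.5, 50.0))         # large: poor illumination
print(gaussian_kld(enhanced.mean(), enhanced.std(), 127.5, 50.0))  # small: close to desired
```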

  11. Stochastic search, optimization and regression with energy applications

    Science.gov (United States)

    Hannah, Lauren A.

    models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  12. Use of linear regression for the processing of curves of differential potentiometric titration of a binary mixture of heterovalent ions using precipitation reactions

    International Nuclear Information System (INIS)

    Mar'yanov, B.M.; Zarubin, A.G.; Shumar, S.V.

    2003-01-01

    A method is proposed for the computer processing of curves of differential potentiometric titration of a binary mixture of heterovalent ions using precipitation reactions. The method is based on the transformation of the titration curve into segment-line characteristics, whose parameters (within the accuracy of the least-squares method) determine the sequence of the equivalence points and the solubility products of the resulting precipitates. The method is applied to the titration of Ag(I)-Cd(II), Hg(II)-Te(IV), and Cd(II)-Te(IV) mixtures by a sodium diethyldithiocarbamate solution with membrane sulfide and glassy carbon indicator electrodes. For 4 to 11 mg of the analyte in 50 ml of the solution, the RSD varies from 1 to 9%.
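
    The specific transformation used in the paper is not reproduced here; as a generic illustration of the segment-line idea, the sketch below fits two line segments to a synthetic titration-style curve by least squares and takes their intersection as the equivalence point.

```python
# Hypothetical sketch: locate an equivalence point by fitting two line segments
# (least squares) to a titration-style curve and intersecting them.
import numpy as np

def two_segment_fit(v, e):
    """Try every breakpoint, fit a line to each side, keep the split with minimal SSE."""
    best = None
    for k in range(3, len(v) - 3):
        p1 = np.polyfit(v[:k], e[:k], 1)
        p2 = np.polyfit(v[k:], e[k:], 1)
        sse = np.sum((np.polyval(p1, v[:k]) - e[:k]) ** 2) + \
              np.sum((np.polyval(p2, v[k:]) - e[k:]) ** 2)
        if best is None or sse < best[0]:
            best = (sse, p1, p2)
    _, p1, p2 = best
    v_eq = (p2[1] - p1[1]) / (p1[0] - p2[0])     # intersection of the two fitted lines
    return v_eq

rng = np.random.default_rng(8)
v = np.linspace(0, 10, 60)                        # titrant volume
e = np.where(v < 6.0, 5.0 * v, 30.0 + 0.5 * (v - 6.0)) + 0.2 * rng.normal(size=v.size)
print(two_segment_fit(v, e))                      # close to the true equivalence point, 6.0
```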

  13. Searching for non-Gaussianity in the WMAP data

    International Nuclear Information System (INIS)

    Bernui, A.; Reboucas, M. J.

    2009-01-01

    Some analyses of recent cosmic microwave background (CMB) data have provided hints that there are deviations from Gaussianity in the WMAP CMB temperature fluctuations. Given the far-reaching consequences of such a non-Gaussianity for our understanding of the physics of the early universe, it is important to employ alternative indicators in order to determine whether the reported non-Gaussianity is of cosmological origin, and/or extract further information that may be helpful for identifying its causes. We propose two new non-Gaussianity indicators, based on skewness and kurtosis of large-angle patches of CMB maps, which provide a measure of departure from Gaussianity on large angular scales. A distinctive feature of these indicators is that they provide sky maps of non-Gaussianity of the CMB temperature data, thus allowing a possible additional window into their origins. Using these indicators, we find no significant deviation from Gaussianity in the three and five-year WMAP Internal Linear Combination (ILC) map with KQ75 mask, while the ILC unmasked map exhibits deviation from Gaussianity, quantifying therefore the WMAP team recommendation to employ the new mask KQ75 for tests of Gaussianity. We also use our indicators to test for Gaussianity the single frequency foreground unremoved WMAP three and five-year maps, and show that the K and Ka maps exhibit a clear indication of deviation from Gaussianity even with the KQ75 mask. We show that our findings are robust with respect to the details of the method.

  14. Transient Properties of a Bistable System with Delay Time Driven by Non-Gaussian and Gaussian Noises: Mean First-Passage Time

    International Nuclear Information System (INIS)

    Li Dongxi; Xu Wei; Guo Yongfeng; Li Gaojie

    2008-01-01

    The mean first-passage time (MFPT) of a bistable system with time-delayed feedback driven by multiplicative non-Gaussian noise and additive Gaussian white noise is investigated. First, the non-Markov process is reduced to a Markov process through a path-integral approach; second, the approximate Fokker-Planck equation is obtained by applying the unified coloured noise approximation, the small-time-delay approximation and the Novikov theorem. Functional analysis and simplification are employed to obtain approximate expressions for the MFPT. The effects on the MFPT of the non-Gaussian parameter r (which measures the deviation from Gaussian character), the delay time τ, the noise correlation time τ0, and the noise intensities D and α are discussed. It is found that the escape time can be reduced by increasing the delay time τ or the noise correlation time τ0, or by reducing the intensities D and α. As far as we know, this is the first time the effect of delay time on the mean first-passage time in a stochastic dynamical system has been considered

  15. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  16. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the second edition: a separate chapter on Bayesian methods; a complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; a greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; and more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  17. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed by both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
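
    A rough sketch of a CWT-PLS pipeline on synthetic overlapped bands: a Mexican-hat continuous wavelet transform (implemented directly, rather than with any particular library routine) is applied to each spectrum, and the resulting coefficients are regressed on the concentration matrix with partial least squares. Wavelet widths, band positions and noise levels are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of CWT-PLS: spectra of overlapped bands are pre-processed with a
# continuous wavelet transform (Mexican-hat wavelet) and the wavelet coefficients are
# regressed on concentrations with partial least squares.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ricker(points, width):
    t = np.arange(points) - (points - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * width) * np.pi ** 0.25)
    return a * (1 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

def cwt(signal, widths, points=61):
    return np.vstack([np.convolve(signal, ricker(points, w), mode="same") for w in widths])

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 200)
bands = [np.exp(-((x - c) ** 2) / 0.002) for c in (0.45, 0.50, 0.55)]   # heavily overlapped peaks

C = rng.uniform(0.1, 1.0, size=(40, 3))                                  # concentrations (3 analytes)
spectra = C @ np.vstack(bands) + 0.01 * rng.normal(size=(40, x.size))

widths = (2, 4, 8, 16)
features = np.vstack([cwt(s, widths).ravel() for s in spectra])          # CWT coefficients as features

pls = PLSRegression(n_components=3).fit(features, C)
print(np.abs(pls.predict(features) - C).mean())                          # small recovery error
```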

  18. Better Autologistic Regression

    Directory of Open Access Journals (Sweden)

    Mark A. Wolters

    2017-11-01

    Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding (the two numbers used to represent the two possible states of the variables) might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
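
    A small sketch of a plus/minus-coded ("symmetric") autologistic model on a square lattice, assuming a standard (uncentered) form: each site's conditional distribution is logistic in the covariate term plus the dependence parameter times the sum of neighbouring values, and a Gibbs sweep simulates the field. Parameter names and values are illustrative only.

```python
# Hypothetical sketch of a plus/minus-coded autologistic model on a grid:
# Gibbs sampling from the conditionals P(z_ij = +1 | neighbours, covariates).
import numpy as np

def gibbs_autologistic(beta0, beta1, lam, X, n_sweeps=200, seed=0):
    """Simulate a plus/minus-coded autologistic field on a square lattice.
    X is an (n, n) covariate array; lam is the pairwise (spatial) dependence parameter."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    z = rng.choice([-1.0, 1.0], size=(n, n))
    for _ in range(n_sweeps):
        for i in range(n):
            for j in range(n):
                nb = sum(z[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < n and 0 <= b < n)
                eta = beta0 + beta1 * X[i, j] + lam * nb
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * eta))   # P(z_ij = +1 | rest)
                z[i, j] = 1.0 if rng.random() < p_plus else -1.0
    return z

rng = np.random.default_rng(10)
X = rng.normal(size=(20, 20))
field = gibbs_autologistic(beta0=0.0, beta1=1.0, lam=0.4, X=X)
print(field.mean())
```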

  19. Regression in organizational leadership.

    Science.gov (United States)

    Kernberg, O F

    1979-02-01

    The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

  20. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.