WorldWideScience

Sample records for generalized likelihood uncertainty

  1. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    ...estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...

  2. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about nine times shorter; (2) the Pareto tradeoffs between metrics are shown clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
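
    For readers unfamiliar with the sampling machinery, the snippet below is a minimal sketch of the box-based ɛ-dominance test that ɛ-NSGAII-style archives typically rely on; it is a generic illustration, not the authors' implementation, and the metric values and ɛ thresholds are made-up assumptions.

```python
import numpy as np

def epsilon_dominates(f_a, f_b, eps):
    """Return True if objective vector f_a epsilon-dominates f_b (minimization).

    Box-based epsilon-dominance: objectives are mapped onto a grid of width eps
    and ordinary Pareto dominance is checked on the grid indices. This is only a
    sketch of the dominance test; the full epsilon-NSGAII archive-update rules
    are not reproduced here.
    """
    box_a = np.floor(np.asarray(f_a, dtype=float) / np.asarray(eps, dtype=float))
    box_b = np.floor(np.asarray(f_b, dtype=float) / np.asarray(eps, dtype=float))
    return bool(np.all(box_a <= box_b) and np.any(box_a < box_b))

# Two candidate parameter sets scored on two error metrics (lower is better)
print(epsilon_dominates([0.12, 0.30], [0.18, 0.35], eps=[0.05, 0.05]))  # True
print(epsilon_dominates([0.12, 0.40], [0.18, 0.35], eps=[0.05, 0.05]))  # False
```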

  3. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters can yield equally good goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variation of 25%) using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrease is equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite criticisms of the GLUE methodology, such as its lack of statistical formality, it proved a useful tool for assisting the modeller in identifying critical parameters.
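
    As a rough illustration of the GLUE workflow described above (sampling, a likelihood measure, a behavioural threshold, and likelihood-weighted uncertainty bounds), here is a minimal sketch; the toy linear `simulate` model, the uniform priors, the Nash-Sutcliffe threshold of 0.5 and all data values are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(params, forcing):
    """Toy stand-in for a monthly water-balance model (hypothetical)."""
    a, b = params
    return a * forcing + b

obs = np.array([3.1, 4.0, 2.5, 5.2, 4.4])        # illustrative monthly runoff
forcing = np.array([1.0, 1.5, 0.8, 2.0, 1.7])    # illustrative forcing

n_samples = 20000
prior = rng.uniform([0.5, -1.0], [4.0, 1.0], size=(n_samples, 2))   # uniform priors
sims = np.array([simulate(p, forcing) for p in prior])

# Nash-Sutcliffe efficiency used as an informal likelihood measure
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioural = nse > 0.5                          # behavioural threshold (assumption)
weights = nse[behavioural] / nse[behavioural].sum()

def weighted_quantile(values, q, w):
    order = np.argsort(values)
    return np.interp(q, np.cumsum(w[order]), values[order])

# Likelihood-weighted 5-95% uncertainty bounds on the simulated runoff
bounds = np.array([[weighted_quantile(sims[behavioural, t], q, weights)
                    for q in (0.05, 0.95)] for t in range(len(obs))])
print(bounds)
```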

  4. Generalized Likelihood Ratio Statistics and Uncertainty Adjustments in Efficient Adaptive Design of Clinical Trials

    CERN Document Server

    Bartroff, Jay

    2011-01-01

    A new approach to adaptive design of clinical trials is proposed in a general multiparameter exponential family setting, based on generalized likelihood ratio statistics and optimal sequential testing theory. These designs are easy to implement, maintain the prescribed Type I error probability, and are asymptotically efficient. Practical issues involved in clinical trials allowing mid-course adaptation and the large literature on this subject are discussed, and comparisons between the proposed and existing designs are presented in extensive simulation studies of their finite-sample performance, measured in terms of the expected sample size and power functions.

  5. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    Science.gov (United States)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.

  6. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has therefore provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could therefore contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.

  7. Generalized uncertainty principles

    CERN Document Server

    Machluf, Ronny

    2008-01-01

    The phenomenon at the heart of classical uncertainty principles has been well known since the 1930s. We introduce a new phenomenon underlying a new notion that we call "generalized uncertainty principles". We show the relation between classical uncertainty principles and generalized uncertainty principles, and we generalize the Landau-Pollak-Slepian uncertainty principle. Our generalization relates the following two quantities and two scaling parameters: 1) the weighted time spreading $\int_{-\infty}^\infty |f(x)|^2 w_1(x)\,dx$ ($w_1(x)$ is a non-negative function); 2) the weighted frequency spreading $\int_{-\infty}^\infty |\hat{f}(\omega)|^2 w_2(\omega)\,d\omega$; 3) the time weight scale $a$, ${w_1}_a(x)=w_1(xa^{-1})$; and 4) the frequency weight scale $b$, ${w_2}_b(\omega)=w_2(\omega b^{-1})$. A "generalized uncertainty principle" is an inequality that summarizes the constraints on the relations between the two spreading quantities and two scaling parameters. For any two reason...

  8. The effects of ionic strength and organic matter on virus inactivation at low temperatures: general likelihood uncertainty estimation (GLUE) as an alternative to least-squares parameter optimization for the fitting of virus inactivation models

    Science.gov (United States)

    Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin

    2017-06-01

    This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
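
    To make the comparison concrete, below is a minimal sketch of fitting a constant-rate (first-order) inactivation model both by nonlinear least squares and by a GLUE-style Monte Carlo search; the time series, prior ranges, likelihood measure and behavioural cutoff are all illustrative assumptions, not the study's data or settings.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical MS2 time series: days and log10 concentration (illustrative values)
t = np.array([0.0, 7.0, 14.0, 28.0, 42.0, 60.0])
logC = np.array([6.0, 5.6, 5.1, 4.3, 3.6, 2.9])

def constant_inactivation(t, logC0, k):
    """Constant-rate inactivation in log10 units: logC(t) = logC0 - k*t."""
    return logC0 - k * t

# (1) Traditional nonlinear least squares
popt, pcov = curve_fit(constant_inactivation, t, logC, p0=[6.0, 0.05])
print("least squares (logC0, k):", popt, "std err:", np.sqrt(np.diag(pcov)))

# (2) GLUE-style Monte Carlo: sample (logC0, k), keep behavioural sets
n = 50000
logC0_s = rng.uniform(5.0, 7.0, n)
k_s = rng.uniform(0.0, 0.2, n)
sims = logC0_s[:, None] - k_s[:, None] * t[None, :]
sse = np.sum((sims - logC) ** 2, axis=1)
like = np.exp(-sse / sse.min())                 # informal likelihood measure (assumption)
behavioural = like > 0.1 * like.max()           # behavioural cutoff (assumption)
print("GLUE 5-95% range for k:", np.percentile(k_s[behavioural], [5, 95]))
```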

  9. Generalized uncertainty relations

    Science.gov (United States)

    Herdegen, Andrzej; Ziobro, Piotr

    2017-04-01

    The standard uncertainty relations (UR) in quantum mechanics are typically used for unbounded operators (like the canonical pair). This implies the need for the control of the domain problems. On the other hand, the use of (possibly bounded) functions of basic observables usually leads to more complex and less readily interpretable relations. In addition, UR may turn trivial for certain states if the commutator of observables is not proportional to a positive operator. In this letter we consider a generalization of standard UR resulting from the use of two, instead of one, vector states. The possibility to link these states to each other in various ways adds additional flexibility to UR, which may compensate some of the above-mentioned drawbacks. We discuss applications of the general scheme, leading not only to technical improvements, but also to interesting new insight.

  10. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
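
    The construction described in the Results section, tracking the density value along each trajectory by adding one extra dimension to the ODE, can be sketched as follows for a toy one-dimensional logistic model; the model, the initial Gaussian density and all numbers are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ODE: logistic growth dx/dt = f(x) = r*x*(1 - x/K)
r, K = 1.0, 10.0
f = lambda x: r * x * (1.0 - x / K)
div_f = lambda x: r * (1.0 - 2.0 * x / K)        # divergence of f (here just df/dx)

def augmented(t, z):
    """Original ODE plus one extra state for log density: d(log rho)/dt = -div f."""
    x, log_rho = z
    return [f(x), -div_f(x)]

# Initial condition drawn from a known initial density rho0 (Gaussian, an assumption)
x0, mu0, s0 = 2.0, 2.0, 0.5
log_rho0 = -0.5 * ((x0 - mu0) / s0) ** 2 - np.log(s0 * np.sqrt(2 * np.pi))

sol = solve_ivp(augmented, (0.0, 3.0), [x0, log_rho0], rtol=1e-8)
x_T, log_rho_T = sol.y[:, -1]
print(f"state at T = 3: {x_T:.3f}, density transported along the trajectory: {np.exp(log_rho_T):.5f}")
```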

  11. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiaotong; SHI Jian

    2005-01-01

    In this paper,a theory on sieve likelihood ratio inference on general parameter spaces(including infinite dimensional) is studied.Under fairly general regularity conditions,the sieve log-likelihood ratio statistic is proved to be asymptotically x2 distributed,which can be viewed as a generalization of the well-known Wilks' theorem.As an example,a emiparametric partial linear model is investigated.

  12. Review on Generalized Uncertainty Principle

    CERN Document Server

    Tawfik, Abdel Nasser

    2015-01-01

    Based on string theory, black hole physics, doubly special relativity and some "thought" experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta.

  13. Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods

    Science.gov (United States)

    Marchant, Ben; Mackay, Jonathan; Bloomfield, John

    2016-09-01

    Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the...

  14. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature, the Rhizoctonia root rot and the Rongelap data. ... Advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
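
    For context, the Laplace approximation mentioned above replaces the intractable integral over the random effects by a Gaussian approximation around its mode; the following is a standard textbook sketch in my own notation, not a formula taken from the paper.

```latex
\ell(\theta) \;=\; \log \int \exp\{h(u;\theta)\}\,du
\;\approx\; h(\hat u;\theta) \;+\; \tfrac{q}{2}\log(2\pi)
\;-\; \tfrac{1}{2}\log\bigl|-h''(\hat u;\theta)\bigr|,
\qquad \hat u = \arg\max_u h(u;\theta),
```

    where $h(u;\theta)=\log f(y\mid u;\theta)+\log f(u;\theta)$ and $q=\dim(u)$; the maximized value $\ell(\hat\theta)$ is what enables the model selection, tests and profile-likelihood intervals referred to in the abstract.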

  15. Conditional likelihood inference in generalized linear mixed models.

    OpenAIRE

    Sartori, Nicola; Severini, T. A.

    2002-01-01

    Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelih...

  16. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  17. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper with extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.

  18. Penalized maximum likelihood estimation for generalized linear point processes

    OpenAIRE

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...

  19. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and degrades pixel intensity. In fetal ultrasound images, edges and local fine details are particularly important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore be designed to suppress speckle noise efficiently while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and by using differently shaped quadrilateral kernels to estimate the noise-free pixel from the neighbourhood. The performance of several filters, namely the median, Kuwahara, Frost, homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.

  20. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case in which the covariance matrix of the response variable is completely known.

  1. Generalized Uncertainty Principle and Angular Momentum

    CERN Document Server

    Bosso, Pasquale

    2016-01-01

    Various models of quantum gravity suggest a modification of Heisenberg's uncertainty principle between position and momentum to the so-called Generalized Uncertainty Principle. In this work we show how this modification influences the theory of angular momentum in quantum mechanics. In particular, we compute Planck-scale corrections to the angular momentum eigenvalues, the hydrogen atom spectrum, the Stern-Gerlach experiment and the Clebsch-Gordan coefficients. We also examine effects of the Generalized Uncertainty Principle on multi-particle systems.
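
    For reference, one widely used form of the position-momentum GUP appearing in this literature (conventions, and the form actually used in this particular paper, may differ) is

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1 + \beta\,(\Delta p)^2\Bigr),
\qquad
[\hat x,\hat p] \;=\; i\hbar\bigl(1+\beta\hat p^{\,2}\bigr),
```

    which implies a minimal position uncertainty $\Delta x_{\min}=\hbar\sqrt{\beta}$; it is this kind of deformation whose consequences for angular momentum are worked out in the paper.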

  2. Sparse-posterior Gaussian Processes for general likelihoods

    CERN Document Server

    Yuan; Abdel-Gawad, Ahmed H.; Minka, Thomas P.

    2012-01-01

    Gaussian processes (GPs) provide a probabilistic nonparametric representation of functions in regression, classification, and other problems. Unfortunately, exact learning with GPs is intractable for large datasets. A variety of approximate GP methods have been proposed that essentially map the large dataset into a small set of basis points. Among them, two state-of-the-art methods are the sparse pseudo-input Gaussian process (SPGP) (Snelson and Ghahramani, 2006) and the variable-sigma GP (VSGP) (Walder et al., 2008), which generalizes SPGP and allows each basis point to have its own length scale. However, VSGP was only derived for regression. In this paper, we propose a new sparse GP framework that uses expectation propagation to directly approximate general GP likelihoods using a sparse and smooth basis. It includes both SPGP and VSGP for regression as special cases. Moreover, as an EP algorithm, it inherits the ability to process data online. As a particular choice of approximating family, we blur each basis point with a...

  3. Uncertainty relations for general unitary operators

    Science.gov (United States)

    Bagchi, Shrobona; Pati, Arun Kumar

    2016-10-01

    We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of a Hilbert space. We show that our bounds are tighter in various cases than the ones existing in the current literature. Using the uncertainty relation for the unitary operators, we obtain the tight state-independent lower bound for the uncertainty of two Pauli observables and anticommuting observables in higher dimensions. With regard to the minimum-uncertainty states, we derive the minimum-uncertainty state equation by the analytic method and relate this to the ground-state problem of the Harper Hamiltonian. Furthermore, the higher-dimensional limit of the uncertainty relations and minimum-uncertainty states are explored. From an operational point of view, we show that the uncertainty in the unitary operator is directly related to the visibility of quantum interference in an interferometer where one arm of the interferometer is affected by a unitary operator. This shows a principle of preparation uncertainty, i.e., for any quantum system, the amount of visibility for two general noncommuting unitary operators is nontrivially upper bounded.

  4. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations, as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.

  5. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    Science.gov (United States)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used for calibration and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4 (Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM)), are considered informal, whereas the remaining ones (L5-L7) are formal. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of the residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary between likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7...
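
    To illustrate the distinction the study draws between informal and formal likelihood functions, here is a minimal sketch of one of each: the Nash-Sutcliffe efficiency (the idea behind L1) and a Gaussian log-likelihood with first-order autoregressive residuals (the idea behind L7). It is a generic sketch, not the paper's exact formulations, and the hydrograph values and error parameters are assumptions.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Informal likelihood measure: Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_ar1_loglik(obs, sim, sigma, phi):
    """Formal likelihood: Gaussian errors with AR(1) serial dependence.

    The first residual is conditioned on, which keeps the sketch short; a full
    treatment would include the stationary term for r[0].
    """
    r = obs - sim
    innov = r[1:] - phi * r[:-1]                 # whitened (decorrelated) residuals
    n = len(innov)
    return -0.5 * n * np.log(2.0 * np.pi * sigma ** 2) - 0.5 * np.sum(innov ** 2) / sigma ** 2

obs = np.array([1.2, 3.4, 7.9, 6.1, 3.0, 1.5])   # illustrative observed hydrograph
sim = np.array([1.0, 3.0, 8.5, 6.0, 3.3, 1.4])   # illustrative simulated hydrograph
print("NS:", nash_sutcliffe(obs, sim))
print("AR(1) Gaussian log-likelihood:", gaussian_ar1_loglik(obs, sim, sigma=0.4, phi=0.5))
```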

  6. Gravitational tests of the generalized uncertainty principle

    Energy Technology Data Exchange (ETDEWEB)

    Scardigli, Fabio [American University of the Middle East, Department of Mathematics, College of Engineering, P.O. Box 220, Dasman (Kuwait); Politecnico di Milano, Dipartimento di Matematica, Milan (Italy); Casadio, Roberto [Alma Mater Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (Italy); INFN, Sezione di Bologna, Bologna (Italy)

    2015-09-15

    We compute the corrections to the Schwarzschild metric necessary to reproduce the Hawking temperature derived from a generalized uncertainty principle (GUP), so that the GUP deformation parameter is directly linked to the deformation of the metric. Using this modified Schwarzschild metric, we compute corrections to the standard general relativistic predictions for the light deflection and perihelion precession, both for planets in the solar system and for binary pulsars. This analysis allows us to set bounds for the GUP deformation parameter from well-known astronomical measurements. (orig.)

  7. A review of the generalized uncertainty principle.

    Science.gov (United States)

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-12-01

    Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.

  8. Generalized Uncertainty Principle: Approaches and Applications

    CERN Document Server

    Tawfik, Abdel Nasser

    2014-01-01

    We review some highlights from string theory, black hole physics and doubly special relativity, and some thought experiments which were suggested to probe the shortest distances and/or maximum momentum at the Planck scale. Furthermore, all models developed in order to implement the minimal length scale and/or the maximum momentum in different physical systems are analysed and compared. They entered the literature as the Generalized Uncertainty Principle (GUP) assuming a modified dispersion relation, and therefore allow for a wide range of applications in estimating, for example, the inflationary parameters, Lorentz invariance violation, black hole thermodynamics, Salecker-Wigner inequalities, the entropic nature of gravitational laws, Friedmann equations, minimal time measurement and thermodynamics of high-energy collisions. One of the higher-order GUP approaches gives predictions for the minimal length uncertainty. A second one predicts a maximum momentum and a minimal length unc...

  9. Phenomenological Implications of the Generalized Uncertainty Principle

    CERN Document Server

    Das, Saurya

    2009-01-01

    Various theories of quantum gravity argue that near the Planck scale, the Heisenberg uncertainty principle should be replaced by the so-called Generalized Uncertainty Principle (GUP). We show that the GUP gives rise to two additional terms in any quantum mechanical Hamiltonian, proportional to $\beta p^4$ and $\beta^2 p^6$ respectively, where $\beta \sim 1/(M_{Pl}c)^2$ is the GUP parameter. These terms become important at or above the Planck energy. Considering only the first of these, and treating it as a perturbation, we show that the GUP affects the Lamb shift, Landau levels, the reflection and transmission coefficients of a potential step and potential barrier, and the current in a scanning tunneling microscope (STM). Although these effects are too small to be measurable at present, we speculate on the possibility of extracting measurable predictions in the future.

  10. A likelihood-based method for haplotype association studies of case-control data with genotyping uncertainty

    Institute of Scientific and Technical Information of China (English)

    ZHU Wensheng; GUO Jianhua

    2006-01-01

    This paper discusses the associations between traits and haplotypes based on FI (fluorescent intensity) data sets. We consider a clustering algorithm based on mixtures of t distributions to obtain all possible genotypes of each individual (i.e. the "GenoSpectrum"). We then propose a likelihood-based approach that incorporates the genotyping uncertainty into the assessment of the associations between traits and haplotypes through a haplotype-based logistic regression model. Simulation studies show that our likelihood-based method can reduce the impact induced by genotyping errors.

  11. Generalized Correlation Coefficient Based on Log Likelihood Ratio Test Statistic

    Directory of Open Access Journals (Sweden)

    Liu Hsiang-Chuan

    2016-01-01

    In this paper, I point out that both Joe's and Ding's strength statistics can only be used for testing pair-wise independence, and I propose a novel G-square based strength statistic, called Liu's generalized correlation coefficient, which can be used to detect and compare the strength not only of pair-wise independence but also of the mutual independence of any multivariate variables. Furthermore, I prove that only Liu's generalized correlation coefficient is strictly increasing in its number of variables, so it is more sensitive and useful than Cramer's V coefficient. In other words, Liu's generalized correlation coefficient is not only a G-square based strength statistic, but also an improved statistic for detecting and comparing the strengths of different associations of any two or more sets of multivariate variables; moreover, this new strength statistic can also be tested by G².
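
    The G-square statistic that such strength measures build on is the standard log likelihood ratio statistic for a contingency table; written in the usual textbook form (my notation, not the paper's),

```latex
G^2 \;=\; 2\sum_i O_i \,\ln\!\frac{O_i}{E_i},
```

    where $O_i$ are the observed cell counts and $E_i$ the counts expected under the independence hypothesis being tested.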

  12. Linear Programming Problems for Generalized Uncertainty

    Science.gov (United States)

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…

  13. Uncertainty quantification for generalized Langevin dynamics

    Science.gov (United States)

    Hall, Eric J.; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-12-01

    We present efficient finite difference estimators for goal-oriented sensitivity indices with applications to the generalized Langevin equation (GLE). In particular, we apply these estimators to analyze an extended variable formulation of the GLE where other well known sensitivity analysis techniques such as the likelihood ratio method are not applicable to key parameters of interest. These easily implemented estimators are formed by coupling the nominal and perturbed dynamics appearing in the finite difference through a common driving noise or common random path. After developing a general framework for variance reduction via coupling, we demonstrate the optimality of the common random path coupling in the sense that it produces a minimal variance surrogate for the difference estimator relative to sampling dynamics driven by independent paths. In order to build intuition for the common random path coupling, we evaluate the efficiency of the proposed estimators for a comprehensive set of examples of interest in particle dynamics. These reduced variance difference estimators are also a useful tool for performing global sensitivity analysis and for investigating non-local perturbations of parameters, such as increasing the number of Prony modes active in an extended variable GLE.
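
    The common-random-path idea described above can be illustrated on a plain overdamped Langevin model (the paper treats the generalized Langevin equation with memory, which is not reproduced here); the snippet below estimates the sensitivity of the time-averaged squared position to the stiffness parameter with a central finite difference in which both runs share the same Brownian increments. All model choices and numbers are illustrative assumptions.

```python
import numpy as np

def simulate_overdamped(theta, dW, dt=1e-3, x0=0.0, kT=1.0, gamma=1.0):
    """Euler-Maruyama for dx = -(theta/gamma) x dt + sqrt(2 kT / gamma) dW.

    The Brownian increments dW are passed in so that the nominal and perturbed
    runs can be driven by a common random path.
    """
    x = x0
    traj = np.empty(len(dW))
    for i, dw in enumerate(dW):
        x += -(theta / gamma) * x * dt + np.sqrt(2.0 * kT / gamma) * dw
        traj[i] = x
    return traj

rng = np.random.default_rng(1)
n_steps, n_rep, dt, eps, theta = 2000, 100, 1e-3, 1e-2, 1.0
sens = np.empty(n_rep)
for rep in range(n_rep):
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)           # common driving noise
    obs_plus = np.mean(simulate_overdamped(theta + eps, dW) ** 2)
    obs_minus = np.mean(simulate_overdamped(theta - eps, dW) ** 2)
    sens[rep] = (obs_plus - obs_minus) / (2.0 * eps)     # coupled central difference
print("d<x^2>/dtheta:", sens.mean(), "+/-", sens.std(ddof=1) / np.sqrt(n_rep))
```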

  14. Lorentz invariance violation and generalized uncertainty principle

    Science.gov (United States)

    Tawfik, Abdel Nasser; Magdy, H.; Ali, A. Farag

    2016-01-01

    There are several theoretical indications that quantum gravity approaches may have predictions for a minimal measurable length and a maximal observable momentum, and thereby a generalization of the Heisenberg uncertainty principle. The generalized uncertainty principle (GUP) is based on a momentum-dependent modification of the standard dispersion relation which is conjectured to violate the principle of Lorentz invariance. From the resulting Hamiltonian, the velocity and time of flight of relativistic distant particles at Planck energy can be derived. A first comparison is made with recent observations of the Hubble parameter in dependence on redshift in early-type galaxies. We find that LIV has two types of contributions to the time-of-flight delay Δt comparable with those observations. Although the OPERA measurement of a faster-than-light muon neutrino anomaly, and hence the inferred Δt and the relative change Δv in the speed of the muon neutrino in dependence on the redshift z, turned out to be wrong, we utilize its main features to estimate Δv. Accordingly, the results could not be interpreted as LIV. A third comparison is made with ultra-high-energy cosmic rays (UHECR). An essential ingredient is found to be the approach combining string theory, loop quantum gravity, black hole physics and doubly special relativity, and the one assuming a perturbative departure from exact Lorentz invariance. Fixing the sensitivity factor and its energy dependence are essential inputs for a reliable confrontation of our calculations with UHECR data. The sensitivity factor is related to the special time-of-flight delay and the time structure of the signal. Furthermore, the upper and lower bounds on the parameter a that characterizes the generalized uncertainty principle have to be fixed in related physical systems such as gamma-ray bursts.

  15. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
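
    A minimal sketch of the idea for a Poisson GLM with the canonical (exponential) link: the costly sum over covariates in the exact log-likelihood is replaced by n times its expectation, which has a closed form when the covariates are Gaussian with known mean and covariance. The simulated data, dimensions and parameter values are assumptions for illustration, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson GLM with canonical link: E[y | x] = exp(x @ theta)
d, n = 5, 200000
mu = np.zeros(d)
Sigma = 0.09 * np.eye(d)                                  # known covariate covariance
X = rng.multivariate_normal(mu, Sigma, size=n)
theta = rng.normal(0.0, 0.2, d)
y = rng.poisson(np.exp(X @ theta))

def exact_loglik(theta):
    eta = X @ theta
    return y @ eta - np.sum(np.exp(eta))                  # y! constant dropped

def expected_loglik(theta):
    # Replace sum_i exp(x_i @ theta) by n * E[exp(x @ theta)];
    # for Gaussian x this expectation is exp(mu @ theta + theta @ Sigma @ theta / 2).
    return y @ (X @ theta) - n * np.exp(mu @ theta + 0.5 * theta @ Sigma @ theta)

print(exact_loglik(theta), expected_loglik(theta))
```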

  16. Generalized uncertainty principle and black hole thermodynamics

    CERN Document Server

    Gangopadhyay, Sunandan; Saha, Anirban

    2013-01-01

    We study the Schwarzschild and Reissner-Nordström black hole thermodynamics using the simplest form of the generalized uncertainty principle (GUP) proposed in the literature. The expressions for the mass-temperature relation, heat capacity and entropy are obtained in both cases, from which the critical and remnant masses are computed. Our results are exact and reveal that these masses are identical and larger than the so-called singular mass, for which the thermodynamic quantities become ill-defined. The expression for the entropy reveals the well-known area theorem in terms of the horizon area in both cases up to leading-order corrections from the GUP. The area theorem written in terms of a new variable, which can be interpreted as the reduced horizon area, arises only when the computation is carried out to the next higher-order correction from the GUP.

  17. Lorentz Invariance Violation and Generalized Uncertainty Principle

    CERN Document Server

    Tawfik, A; Ali, A Farag

    2016-01-01

    Recent approaches to quantum gravity are conjectured to give predictions for a minimum measurable length, a maximum observable momentum and an essential generalization of the Heisenberg uncertainty principle (GUP). The latter is based on a momentum-dependent modification of the standard dispersion relation and leads to Lorentz invariance violation (LIV). The main features of the controversial OPERA measurements of the faster-than-light muon neutrino anomaly are used to calculate the time-of-flight delays Δt and the relative change Δv in the speed of the neutrino in dependence on the redshift z. The results are compared with the OPERA measurements. We find that the measurements are too large to be interpreted as LIV. Depending on the rest mass, the propagation of a high-energy muon neutrino can be superluminal. The comparison with ultra-high-energy cosmic rays seems to reveal an essential ingredient of the approach combining string theory, loop quantum gravity, black hole physics and doubly...

  18. Flexible and generalized uncertainty optimization theory and methods

    CERN Document Server

    Lodwick, Weldon A

    2017-01-01

    This book presents the theory and methods of flexible and generalized uncertainty optimization. In particular, it describes the theory of generalized uncertainty in the context of optimization modeling. The book starts with an overview of flexible and generalized uncertainty optimization. It covers uncertainties that are associated with a lack of information and that are more general than those of stochastic theory, where well-defined distributions are assumed. Starting from families of distributions that are enclosed by upper and lower functions, the book presents construction methods for obtaining flexible and generalized uncertainty input data that can be used in a flexible and generalized uncertainty optimization model. It then describes the development of such a model in detail. All in all, the book provides readers with the necessary background to understand flexible and generalized uncertainty optimization and develop their own optimization models.

  19. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association

    OpenAIRE

    Hsu Li; Helmer Quinta; de Visser Marieke CH; Uitte de Willige Shirley; el Galta Rachid; Houwing-Duistermaat Jeanine J

    2007-01-01

    Background: In this paper, we propose a one-degree-of-freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. Results: By means of a simulation study...

  20. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  1. Application of a generalized likelihood function for parameter inference of a carbon balance model using multiple, joint constraints

    Science.gov (United States)

    Hammerle, Albin; Wohlfahrt, Georg; Schoups, Gerrit

    2014-05-01

    Advances in automated data collection systems have enabled ecologists to collect enormous amounts of varied data. Data assimilation (or data-model synthesis) is one way to make sense of this mass of data. Given a process model designed to learn about ecological processes, these data can be integrated within a statistical framework for data interpretation and extrapolation. The results of such a data assimilation framework clearly depend on the information content of the observed data, on the associated uncertainties (data uncertainties, model structural uncertainties and parameter uncertainties) and on the underlying assumptions. Parameter estimation is usually done by minimizing a simple least squares objective function with respect to the model parameters, presuming Gaussian, independent and homoscedastic errors (the formal approach). Recent contributions to the (ecological) literature, however, have questioned the validity of this approach when confronted with significant errors and uncertainty in the model forcing (inputs) and model structure. Very often, residual errors are non-Gaussian, correlated and heteroscedastic. These error sources therefore have to be considered, and residual errors have to be described in a statistically correct fashion in order to draw statistically sound conclusions about parameter and model predictive uncertainties. We examined the effects of a generalized likelihood (GL) function on the parameter estimation of a carbon balance model. Compared with the formal approach, the GL function allows for correlation, non-stationarity and non-normality of the model residuals. The carbon model parameters have been constrained using three different datasets, each of them modelled by its own GL function. As shown in the literature, the use of different datasets for parameter estimation reduces the uncertainty in model parameters and model predictions and allows for a better quantification of, and more insight into, model processes.
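
    The kind of residual-error model a generalized likelihood allows for can be sketched as follows: residuals are first whitened with an AR(1) coefficient and then evaluated under a Gaussian density whose standard deviation grows with the simulated flux. This is a simplified stand-in for the full GL function (which also treats skewness and kurtosis); the flux series and all parameter values are assumptions.

```python
import numpy as np

def generalized_loglik(obs, sim, sigma0, sigma1, phi):
    """Simplified generalized-likelihood sketch: AR(1) plus heteroscedastic errors.

    sigma_t = sigma0 + sigma1 * |sim_t| models error standard deviations that scale
    with the simulated flux (an assumption); the skewness/kurtosis terms of the
    full GL formulation are omitted.
    """
    r = obs - sim
    a = r[1:] - phi * r[:-1]                      # remove first-order autocorrelation
    sigma_t = sigma0 + sigma1 * np.abs(sim[1:])   # heteroscedastic error std
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma_t ** 2) - 0.5 * (a / sigma_t) ** 2)

obs = np.array([2.0, 5.5, 9.8, 7.2, 4.1, 2.6])    # illustrative carbon-flux series
sim = np.array([2.2, 5.0, 10.5, 7.0, 4.5, 2.4])
print(generalized_loglik(obs, sim, sigma0=0.1, sigma1=0.05, phi=0.4))
```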

  2. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  3. Incorporation of generalized uncertainty principle into Lifshitz field theories

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: f2mir@uwaterloo.ca [Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Majumder, Barun, E-mail: barunbasanta@iitgn.ac.in [Indian Institute of Technology Gandhinagar, Ahmedabad, 382424 (India)

    2015-06-15

    In this paper, we will incorporate the generalized uncertainty principle into field theories with Lifshitz scaling. We will first construct both bosonic and fermionic theories with Lifshitz scaling based on the generalized uncertainty principle. After that we will incorporate the generalized uncertainty principle into a non-abelian gauge theory with Lifshitz scaling. We will observe that even though the action for this theory is non-local, it is invariant under local gauge transformations. We will also perform the stochastic quantization of this Lifshitz fermionic theory based on the generalized uncertainty principle.

  4. Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN; Changming; ZHAO; Lincheng; WEI; Chengdong

    2006-01-01

    In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses that is as weak as possible and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.

  5. Asymptotic Properties of the Maximum Likelihood Estimate in Generalized Linear Models with Stochastic Regressors

    Institute of Scientific and Technical Information of China (English)

    Jie Li DING; Xi Ru CHEN

    2006-01-01

    For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) $\hat{\beta}_n$ of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and asymptotic normality of $\hat{\beta}_n$.

  6. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  7. ASYMPTOTIC NORMALITY OF MAXIMUM QUASI-LIKELIHOOD ESTIMATORS IN GENERALIZED LINEAR MODELS WITH FIXED DESIGN

    Institute of Scientific and Technical Information of China (English)

    Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG

    2008-01-01

    In generalized linear models with fixed design, under the assumption that $\underline{\lambda}_n \to \infty$ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator $\hat{\beta}_n$, which is the root of the quasi-likelihood equation with natural link function $\sum_{i=1}^{n} X_i\bigl(y_i-\mu(X_i'\beta)\bigr)=0$, is obtained, where $\underline{\lambda}_n$ denotes the minimum eigenvalue of $\sum_{i=1}^{n} X_iX_i'$, $X_i$ are bounded p × q regressors, and $y_i$ are q × 1 responses.

  8. A Likelihood-Based SLIC Superpixel Algorithm for SAR Images Using Generalized Gamma Distribution

    Directory of Open Access Journals (Sweden)

    Huanxin Zou

    2016-07-01

    The simple linear iterative clustering (SLIC) method is a recently proposed popular superpixel algorithm. However, this method may generate bad superpixels for synthetic aperture radar (SAR) images due to the effects of speckle and the large dynamic range of pixel intensity. In this paper, an improved SLIC algorithm for SAR images is proposed. This algorithm exploits the likelihood information of SAR image pixel clusters. Specifically, a local clustering scheme combining intensity similarity with spatial proximity is proposed. Additionally, for post-processing, a local edge-evolving scheme that combines spatial context and likelihood information is introduced as an alternative to the connected components algorithm. To estimate the likelihood information of SAR image clusters, we incorporated a generalized gamma distribution (GΓD). Finally, the superiority of the proposed algorithm was validated using both simulated and real-world SAR images.
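
    A rough sketch of what a likelihood-based SLIC distance can look like: the intensity term of the standard SLIC distance is replaced by the negative log-likelihood of the pixel intensity under a generalized gamma distribution fitted to the cluster, combined with a normalized spatial term. The weighting, the fitting choices and the toy data are assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import gengamma

def cluster_dissimilarity(intensity, xy, cluster_intensities, cluster_center_xy,
                          S, weight=0.5):
    """Likelihood-plus-proximity dissimilarity (sketch; weights are assumptions)."""
    # Fit a generalized gamma distribution to the cluster's intensities (location at 0)
    a, c, loc, scale = gengamma.fit(cluster_intensities, floc=0)
    neg_loglik = -gengamma.logpdf(intensity, a, c, loc=loc, scale=scale)
    spatial = np.linalg.norm(np.asarray(xy, float) - np.asarray(cluster_center_xy, float)) / S
    return neg_loglik + weight * spatial

# Toy usage with simulated SAR-like amplitudes
rng = np.random.default_rng(5)
cluster = rng.gamma(shape=4.0, scale=20.0, size=500)      # stand-in cluster intensities
print(cluster_dissimilarity(intensity=90.0, xy=(10, 12),
                            cluster_intensities=cluster,
                            cluster_center_xy=(8, 15), S=10.0))
```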

  9. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs, are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
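
    A minimal sketch of the simple-random-sampling idea described above: the infectious pressure on each susceptible individual is approximated by summing a spatial distance kernel over a random subsample of the infectives and rescaling. The power-law kernel, parameter names and rescaling are illustrative assumptions, not the exact ILM of the paper.

        import numpy as np

        def sampled_infection_prob(sus_xy, inf_xy, alpha=1.0, beta=2.0,
                                   sample_frac=0.1, rng=None):
            """Approximate P(infection) for each susceptible in a spatial ILM by
            summing the distance kernel over a random subsample of infectives
            (simple random sampling variant; illustrative sketch only)."""
            rng = rng or np.random.default_rng()
            sus_xy = np.asarray(sus_xy, float)
            inf_xy = np.asarray(inf_xy, float)
            n_inf = len(inf_xy)
            k = max(1, int(sample_frac * n_inf))
            idx = rng.choice(n_inf, size=k, replace=False)
            d = np.linalg.norm(sus_xy[:, None, :] - inf_xy[idx][None, :, :], axis=2)
            d = np.maximum(d, 1e-6)                      # guard against coincident points
            pressure = (n_inf / k) * np.sum(d ** (-beta), axis=1)   # rescaled kernel sum
            return 1.0 - np.exp(-alpha * pressure)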

  10. Some Implications of Two Forms of the Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Mohammed M. Khalil

    2014-01-01

    Full Text Available Various theories of quantum gravity predict the existence of a minimum length scale, which leads to the modification of the standard uncertainty principle to the Generalized Uncertainty Principle (GUP. In this paper, we study two forms of the GUP and calculate their implications on the energy of the harmonic oscillator and the hydrogen atom more accurately than previous studies. In addition, we show how the GUP modifies the Lorentz force law and the time-energy uncertainty principle.
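
    For reference, the two commutator deformations usually meant by "two forms of the GUP" are the quadratic one and the linear-plus-quadratic one; the coefficients below follow one common convention and should be checked against the paper itself.

        % Quadratic deformation and the uncertainty relation it implies:
        [\hat{x},\hat{p}] = i\hbar\,(1+\beta\hat{p}^{2})
          \;\Longrightarrow\;
          \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1+\beta(\Delta p)^{2}+\beta\langle\hat{p}\rangle^{2}\right),
        % Linear-plus-quadratic deformation (coefficients are convention-dependent):
        [\hat{x},\hat{p}] = i\hbar\,(1-\alpha\hat{p}+2\alpha^{2}\hat{p}^{2}).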

  11. ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    YUE LI; CHEN XIRU

    2005-01-01

    For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.

  12. Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Gao Rong LI; Ping TIAN; Liu Gen XUE

    2008-01-01

    In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.

  13. Robust stabilization of general nonlinear systems with structural uncertainty

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper deals with the robust stabilization and passivity of general nonlinear systems with structural uncertainty. By using a Lyapunov function, it verifies that under some conditions robust passivity implies zero-state detectability; furthermore, it also implies robust stabilization for such nonlinear systems. We then establish a stabilization method for nonlinear systems with structural uncertainty. The smooth state feedback law can be constructed from the solution of an equation. Finally, it is worth noting that the main contribution of the paper is to establish the relation between robust passivity and feedback stabilization for general nonlinear systems with structural uncertainty. A simulation shows the effectiveness of the method.

  14. A maximum likelihood model for fitting power functions with data uncertainty: A case study on the relationship between body lengths and masses for Sciuridae species worldwide

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2016-09-01

    Full Text Available In this report, a maximum likelihood model is developed to incorporate data uncertainty in the response and explanatory variables when fitting power-law bivariate relationships in ecology and evolution. This simple likelihood model is applied to an empirical data set related to the allometric relationship between body mass and length of Sciuridae species worldwide. The results show that the values of parameters estimated by the proposed likelihood model are substantially different from those fitted by the nonlinear least-of-square (NLOS) method. Accordingly, the power-law models fitted by the two methods have different curvilinear shapes. These discrepancies are caused by the integration of measurement errors in the proposed likelihood model, which the NLOS method fails to do. Because the current likelihood model and the NLOS method can give different results, the inclusion of measurement errors may offer new insights into the interpretation of scaling or power laws in ecology and evolution.
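
    The sketch below shows one simple way to fold measurement error in both variables into a likelihood for a power law y = a*x**b, with the x-error propagated to first order into the effective variance. It is an illustration written for this listing under Gaussian-error assumptions, not the exact model of the paper, and all names are my own.

        import numpy as np
        from scipy.optimize import minimize

        def fit_power_law(x, y, sx, sy):
            """Maximum-likelihood fit of y = a * x**b when both variables carry
            1-sigma measurement errors sx, sy (x-error folded into the variance
            by first-order propagation; illustrative sketch only)."""
            def neg_log_lik(theta):
                a, b = np.exp(theta[0]), theta[1]      # a > 0 via log-parameterization
                mu = a * x ** b
                var = sy ** 2 + (a * b * x ** (b - 1) * sx) ** 2
                return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
            res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead",
                           options={"maxiter": 5000})
            return np.exp(res.x[0]), res.x[1]

        # Toy data: mass ~ 0.05 * length**3 with noise in both variables
        rng = np.random.default_rng(1)
        length = rng.uniform(10, 40, size=80)
        mass = 0.05 * length ** 3
        sx, sy = 0.5 * np.ones_like(length), 0.1 * mass
        print(fit_power_law(length + rng.normal(0, sx), mass + rng.normal(0, sy), sx, sy))
        # should recover roughly a ~ 0.05, b ~ 3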

  15. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YUE Li; CHEN Xiru

    2004-01-01

    Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification and some other smooth conditions,it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate of this solution tending to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved anymore.

  16. Generalized Uncertainty Principle: Implications for Black Hole Complementarity

    CERN Document Server

    Chen, Pisin; Yeom, Dong-han

    2014-01-01

    At the heart of the black hole information loss paradox and the firewall controversy lies the conflict between quantum mechanics and general relativity. Much has been said about quantum corrections to general relativity, but much less in the opposite direction. It is therefore crucial to examine possible corrections to quantum mechanics due to gravity. Indeed, the Heisenberg Uncertainty Principle is one profound feature of quantum mechanics, which nevertheless may receive correction when gravitational effects become important. Such generalized uncertainty principle [GUP] has been motivated from not only quite general considerations of quantum mechanics and gravity, but also string theoretic arguments. We examine the role of GUP in the context of black hole complementarity. We find that while complementarity can be violated by large N rescaling if one assumes only the Heisenberg's Uncertainty Principle, the application of GUP may save complementarity, but only if certain N-dependence is also assumed. This rais...

  17. Correlation structure and variable selection in generalized estimating equations via composite likelihood information criteria.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2016-06-30

    The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable, and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE.

  18. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
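
    A toy sketch of the underlying "likelihood-ratio trick": a probabilistic classifier trained to separate samples generated under two parameter points approximates s(x) = p1(x) / (p0(x) + p1(x)), so s/(1-s) approximates the likelihood ratio. The Gaussian toy data and the use of scikit-learn's logistic regression are my own illustrative choices, not material from the talk.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Train a classifier to separate samples drawn under theta_0 from theta_1;
        # its calibrated output approximates p1/(p0 + p1), hence the likelihood ratio.
        rng = np.random.default_rng(2)
        x0 = rng.normal(0.0, 1.0, size=(5000, 1))    # samples under theta_0
        x1 = rng.normal(0.5, 1.0, size=(5000, 1))    # samples under theta_1
        X = np.vstack([x0, x1])
        y = np.concatenate([np.zeros(5000), np.ones(5000)])

        clf = LogisticRegression().fit(X, y)
        x_test = np.array([[0.25]])
        s = clf.predict_proba(x_test)[0, 1]
        lr_hat = s / (1.0 - s)
        lr_exact = np.exp(0.5 * x_test[0, 0] - 0.125)   # exact ratio of the two Gaussians
        print(lr_hat, lr_exact)                          # both close to 1 at x = 0.25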

  19. Generalized uncertainty principles, effective Newton constant and regular black holes

    CERN Document Server

    Li, Xiang; Shen, You-Gen; Liu, Cheng-Zhou; He, Hong-Sheng; Xu, Lan-Fang

    2016-01-01

    In this paper, we explore the quantum spacetimes that are potentially connected with the generalized uncertainty principles. By analyzing the gravity-induced quantum interference pattern and the Gedanken experiment for weighing a photon, we find that the generalized uncertainty principles inspire the same effective Newton constant as our previous proposal. A characteristic momentum associated with the tidal effect is suggested, which incorporates the quantum effect with the geometric nature of gravity. When the simplest generalized uncertainty principle is considered, the minimal model of the regular black holes is reproduced by the effective Newton constant. The black hole's tunneling probability, accurate to the second order correction, is carefully analyzed. We find that the tunneling probability is regularized by the size of the black hole remnant. Moreover, the black hole remnant is the final state of a tunneling process for which the probability is minimized. A theory of modified gravity is suggested, by substituting...

  20. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^{n} X_i\,(y_i-\mu(X_i^{T}\beta))=0$ for the univariate generalized linear model $E(y\,|\,X)=\mu(X^{T}\beta)$. Given uncorrelated residuals $\{e_i=Y_i-\mu(X_i^{T}\beta_0),\ 1\le i\le n\}$ and other conditions, we prove that $\hat{\beta}_n-\beta_0=O_p(\lambda_n^{-1/2})$ holds, where $\hat{\beta}_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$ and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n=\sum_{i=1}^{n} X_i X_i^{T}$. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is $S_n^{-1}\to 0$ as the sample size $n\to\infty$.

  1. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    ZHANG SanGuo; LIAO Yuan

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation $\sum_{i=1}^{n} X_i\,(y_i-\mu(X_i^{T}\beta))=0$ for the univariate generalized linear model $E(y\,|\,X)=\mu(X^{T}\beta)$. Given uncorrelated residuals $\{e_i=Y_i-\mu(X_i^{T}\beta_0),\ 1\le i\le n\}$ and other conditions, we prove that $\hat{\beta}_n-\beta_0=O_p(\lambda_n^{-1/2})$ holds, where $\hat{\beta}_n$ is a root of the above equation, $\beta_0$ is the true value of the parameter $\beta$ and $\lambda_n$ denotes the smallest eigenvalue of the matrix $S_n=\sum_{i=1}^{n} X_i X_i^{T}$. We also show that the convergence rate above is sharp, provided an independent non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of QMLE is $S_n^{-1}\to 0$ as the sample size $n\to\infty$.

  2. Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae

    Science.gov (United States)

    Ponder, Kara A.; Wood-Vasey, W. Michael; Zentner, Andrew R.

    2016-07-01

    Traditional cosmological inference using Type Ia supernovae (SNe Ia) has used stretch- and color-corrected fits of SN Ia light curves and assumed a resulting fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. As systematics become the main contributors to the error budget, it has become imperative to expand supernova cosmology analyses to include a more general likelihood to model systematics to remove biases with losses in precision. To illustrate an example likelihood analysis, we use a simple model of two populations with a relative luminosity shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. Treating observationally viable two-population mock data using a one-population model results in an inferred dark energy equation of state parameter w that is biased by roughly 2 times its statistical error for a sample of $N \gtrsim 2500$ SNe Ia. Modeling the two-population data with a two-population model removes this bias at a cost of an approximately 20% increase in the statistical constraint on w. These significant biases can be realized even if the support for two underlying SNe Ia populations, in the form of model selection criteria, is inconclusive. With the current observationally estimated difference in the two proposed populations, a sample of $N \gtrsim 10{,}000$ SNe Ia is necessary to yield conclusive evidence of two populations.

  3. Bounds in the generalized Weber problem under locational uncertainty

    DEFF Research Database (Denmark)

    Juel, Henrik

    1981-01-01

    An existing analysis of the bounds on the Weber problem solution under uncertainty is incorrect. For the generalized problem with arbitrary measures of distance, we give easily computable ranges on the bounds and state the conditions under which the exact values of the bounds can be found...

  4. Pseudoharmonic oscillator in quantum mechanics with a generalized uncertainty principle

    CERN Document Server

    Boukhellout, Abdelmalek

    2013-01-01

    The pseudoharmonic oscillator potential is studied in quantum mechanics with a generalized uncertainty relation characterized by the existence of a minimal length. By using the perturbative approach of Brau, we compute the correction to the energy spectrum in the first order of the minimal length parameter $\beta$. The effect of the minimal length on the vibration-rotation of diatomic molecules is discussed.

  5. Investigation of Free Particle Propagator with Generalized Uncertainty Problem

    CERN Document Server

    Ghobakhloo, F

    2016-01-01

    We consider the Schrödinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered for vanishing minimal length parameter.

  6. Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test

    KAUST Repository

    Madakyaru, Muddu

    2017-02-16

    Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to account for the multivariate and multi-scale nature of process dynamics, a MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied to the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent variable based fault detection methods even further. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.
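
    The GLR step on model residuals can be illustrated with a minimal moving-window test for a mean shift in (assumed Gaussian) residuals; the MSPLS modelling stage that produces those residuals is not shown, and the window length and threshold below are placeholder choices of mine.

        import numpy as np

        def glr_mean_shift(residuals, window=20, sigma=1.0):
            """Moving-window GLR statistic for a mean shift in Gaussian residuals:
            maximizing the likelihood over the unknown shift gives the log-ratio
            T = (sum of windowed residuals)**2 / (2 * sigma**2 * m)."""
            e = np.asarray(residuals, dtype=float)
            T = np.zeros_like(e)
            for t in range(len(e)):
                w = e[max(0, t - window + 1): t + 1]
                T[t] = w.sum() ** 2 / (2.0 * sigma ** 2 * len(w))
            return T

        # Residuals with an anomaly (mean shift) starting at t = 150
        rng = np.random.default_rng(3)
        e = rng.normal(0, 1, 300)
        e[150:] += 1.5
        alarm = glr_mean_shift(e) > 8.0      # threshold to be set from the H0 distribution
        print(int(alarm.argmax()))           # first alarm index, shortly after 150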

  7. Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes

    Institute of Scientific and Technical Information of China (English)

    Hong Chang HU; Lei SONG

    2014-01-01

    The paper studies a generalized linear model (GLM) $y_t = h(x_t^{T}\beta) + \varepsilon_t$, $t = 1, 2, \ldots, n$, where $\varepsilon_1 = \eta_1$, $\varepsilon_t = \rho\varepsilon_{t-1} + \eta_t$ for $t = 2, 3, \ldots, n$, $h$ is a continuously differentiable function, and the $\eta_t$ are independent and identically distributed random errors with zero mean and finite variance $\sigma^2$. Firstly, the quasi-maximum likelihood (QML) estimators of $\beta$, $\rho$ and $\sigma^2$ are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.

  8. Incorporating Astrophysical Systematics into a Generalized Likelihood for Cosmology with Type Ia Supernovae

    CERN Document Server

    Ponder, Kara A; Zentner, Andrew R

    2015-01-01

    Traditional cosmological inference using Type Ia supernovae (SNeIa) has used stretch- and color-corrected fits of SN Ia light curves and assumed a resulting fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. However, the recent literature has presented mounting evidence that SNeIa have different width-color-corrected luminosities, depending on the environment in which they are found. Such correlations suggest the existence of multiple populations of SNeIa and a non-Gaussian distribution of relative luminosity. We introduce a framework that provides a generalized full-likelihood approach to accommodate multiple populations with unknown population parameters. To illustrate this framework we use a simple model of two populations with a relative shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. We generate mock SN Ia data sets from an underlying two-population model and use a Markov Chain Monte Carlo algorithm ...

  9. Constraining the generalized uncertainty principle with gravitational wave

    CERN Document Server

    Feng, Zhong-Wen; Li, Hui-Ling; Zu, Xiao-Tao

    2016-01-01

    Various theories of quantum gravity suggest a modification of the Heisenberg uncertainty principle to a so-called generalized uncertainty principle (GUP), which produces significant modifications to different physical systems. For this reason, in this paper, we investigate the speed of the graviton by utilizing two proposals for the GUP. Then, to comply with the data of the event GW150914, we set upper bounds on the GUP parameters. It is found that the upper limits of the GUP parameters $\beta_0$ and $\alpha_0$ are $1.44675 \times10^{10}$ and $1.35934 \times10^{-4}$.

  10. Generalized Uncertainty Principle and Analogue of Quantum Gravity in Optics

    CERN Document Server

    Braidotti, Maria Chiara; Conti, Claudio

    2016-01-01

    The design of optical systems capable of processing and manipulating ultra-short pulses and ultra-focused beams is highly challenging with far reaching fundamental technological applications. One key obstacle routinely encountered while implementing sub-wavelength optical schemes is how to overcome the limitations set by standard Fourier optics. A strategy to overcome these difficulties is to utilize the concept of generalized uncertainty principle (G-UP) that has been originally developed to study quantum gravity. In this paper we propose to use the concept of G-UP within the framework of optics to show that the generalized Schrödinger equation describing short pulses and ultra-focused beams predicts the existence of a minimal spatial or temporal scale which in turn implies the existence of maximally localized states. Using a Gaussian wavepacket with complex phase, we derive the corresponding generalized uncertainty relation and its maximally localized states. We numerically show that the presence of nonlin...

  11. Dealing with uncertainty in general practice: an essential skill for the general practitioner

    NARCIS (Netherlands)

    O'Riordan, M.; Dahinden, A.; Akturk, Z.; Ortiz, J.M.; Dagdeviren, N.; Elwyn, G.; Micallef, A.; Murtonen, M.; Samuelson, M.; Struk, P.; Tayar, D.; Thesen, J.

    2011-01-01

    Many patients attending general practice do not have an obvious diagnosis at presentation. Skills to deal with uncertainty are particularly important in general practice as undifferentiated and unorganised problems are a common challenge for general practitioners (GPs). This paper describes the mana

  12. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...

  13. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    Science.gov (United States)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  14. THE GENERALIZED MAXIMUM LIKELIHOOD METHOD APPLIED TO HIGH PRESSURE PHASE EQUILIBRIUM

    Directory of Open Access Journals (Sweden)

    Lúcio CARDOZO-FILHO

    1997-12-01

    Full Text Available The generalized maximum likelihood method was used to determine binary interaction parameters between carbon dioxide and components of orange essential oil. Vapor-liquid equilibrium was modeled with Peng-Robinson and Soave-Redlich-Kwong equations, using a methodology proposed in 1979 by Asselineau, Bogdanic and Vidal. Experimental vapor-liquid equilibrium data on binary mixtures formed with carbon dioxide and compounds usually found in orange essential oil were used to test the model. These systems were chosen to demonstrate that the maximum likelihood method produces binary interaction parameters for cubic equations of state capable of satisfactorily describing phase equilibrium, even for a binary such as ethanol/CO2. Results corroborate that the Peng-Robinson, as well as the Soave-Redlich-Kwong, equation can be used to describe phase equilibrium for the following systems: components of essential oil of orange/CO2.

  15. A Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    Science.gov (United States)

    Sarachi, S.

    2013-12-01

    A mixture model of the Generalized Normal Distribution and the Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based and stage IV radar rainfall under a given spatial and temporal resolution (e.g. 1°×1° and daily rainfall). The distribution parameters of GND-G are extended across various rainfall rates and spatial and temporal resolutions. In the study, GND-G is used to describe the uncertainty of the estimates from the Precipitation Estimation from Remote Sensing Information using Artificial Neural Network algorithm (PERSIANN). The stage IV-based multi-sensor precipitation estimates (MPE) are used as reference measurements. The study area for constructing the uncertainty model covers a 15°×15° box of 0.25°×0.25° cells over the eastern United States for summer 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. Results show that GND-G fits the reference precipitation data better than other statistical uncertainty models, such as the Gaussian and Gamma distributions. The impact of precipitation uncertainty on the stream flow is further demonstrated by Monte Carlo simulation of precipitation forcing in the hydrologic model. The NWS DMIP2 basin of the Illinois River south of Siloam is selected for this case study; the data cover the time period 2006 to 2008. The uncertainty range of stream flow resulting from the GND-G precipitation distributions is calculated and discussed.
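
    As a schematic of how such a fitted error model propagates to stream flow, the sketch below draws perturbed precipitation series from a user-supplied error sampler (standing in for the GND-G mixture, which is not implemented here), routes each realization through a toy linear-reservoir model (a placeholder for the actual hydrologic model), and reports flow quantiles. All model choices and names are illustrative assumptions.

        import numpy as np

        def linear_reservoir(precip, k=0.2):
            """Toy single linear reservoir; a stand-in for the hydrologic model."""
            storage, flow = 0.0, []
            for p in precip:
                storage += p
                q = k * storage
                storage -= q
                flow.append(q)
            return np.array(flow)

        def flow_uncertainty(sat_precip, error_sampler, n_mc=500, rng=None):
            """Monte Carlo propagation of precipitation uncertainty to stream flow:
            sample perturbed precipitation from the fitted error model, route each
            realization through the hydrologic model, summarize with quantiles."""
            rng = rng or np.random.default_rng()
            flows = np.array([linear_reservoir(error_sampler(sat_precip, rng))
                              for _ in range(n_mc)])
            return np.percentile(flows, [5, 50, 95], axis=0)

        # Placeholder: multiplicative log-normal error instead of the GND-G mixture
        rng = np.random.default_rng(4)
        sat = rng.gamma(0.6, 5.0, size=90)            # synthetic daily precipitation
        sampler = lambda p, r: p * r.lognormal(0.0, 0.3, size=p.shape)
        q05, q50, q95 = flow_uncertainty(sat, sampler)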

  16. Dealing with uncertainty in general practice: an essential skill for the general practitioner.

    Science.gov (United States)

    O'Riordan, Margaret; Dahinden, André; Aktürk, Zekeriya; Ortiz, José Miguel Bueno; Dağdeviren, Nezih; Elwyn, Glyn; Micallef, Adrian; Murtonen, Mikko; Samuelson, Marianne; Struk, Per; Tayar, Danny; Thesen, Janecke

    2011-01-01

    Many patients attending general practice do not have an obvious diagnosis at presentation. Skills to deal with uncertainty are particularly important in general practice as undifferentiated and unorganised problems are a common challenge for general practitioners (GPs). This paper describes the management of uncertainty as an essential skill which should be included in educational programmes for both trainee and established GPs. Philosophers, psychologists and sociologists use different approaches to the conceptualisation of managing uncertainty. The literature on dealing with uncertainty focuses largely on identifying relevant evidence and decision making. Existing models of the consultation should be improved in order to understand consultations involving uncertainty. An alternative approach focusing on shared decision making and understanding the consultation from the patient's perspective is suggested. A good doctor-patient relationship is vital, creating trust and mutual respect, developed over time with good communication skills. Evidence-based medicine should be used, including discussion of probabilities where available. Trainers need to be aware of their own use of heuristics as they act as role models for trainees. Expression of feelings by trainees should be encouraged and acknowledged by trainers as a useful tool in dealing with uncertainty. Skills to deal with uncertainty should be regarded as quality improvement tools and included in educational programmes involving both trainee and established GPs.

  17. Constraining the generalized uncertainty principle with cold atoms

    CERN Document Server

    Gao, Dongfeng

    2016-01-01

    Various theories of quantum gravity predict the existence of a minimum length scale, which implies the Planck-scale modifications of the Heisenberg uncertainty principle to a so-called generalized uncertainty principle (GUP). Previous studies of the GUP focused on its implications for high-energy physics, cosmology, and astrophysics. Here, the application of the GUP to low-energy quantum systems, and particularly cold atoms, is studied. Results from the $^{87}$Rb atom recoil experiment are used to set upper bounds on parameters in three different GUP proposals. A $10^{14}$-level bound on the Ali-Das-Vagenas proposal is found, which is the second best bound so far. A $10^{26}$-level bound on Maggiore's proposal is obtained, which turns out to be the best available bound on it.

  18. Generalized uncertainty principle and analogue of quantum gravity in optics

    Science.gov (United States)

    Braidotti, Maria Chiara; Musslimani, Ziad H.; Conti, Claudio

    2017-01-01

    The design of optical systems capable of processing and manipulating ultra-short pulses and ultra-focused beams is highly challenging with far reaching fundamental technological applications. One key obstacle routinely encountered while implementing sub-wavelength optical schemes is how to overcome the limitations set by standard Fourier optics. A strategy to overcome these difficulties is to utilize the concept of a generalized uncertainty principle (G-UP) which has been originally developed to study quantum gravity. In this paper we propose to use the concept of G-UP within the framework of optics to show that the generalized Schrödinger equation describing short pulses and ultra-focused beams predicts the existence of a minimal spatial or temporal scale which in turn implies the existence of maximally localized states. Using a Gaussian wavepacket with complex phase, we derive the corresponding generalized uncertainty relation and its maximally localized states. Furthermore, we numerically show that the presence of nonlinearity helps the system to reach its maximal localization. Our results may trigger further theoretical and experimental tests for practical applications and analogues of fundamental physical theories.

  19. Effects of the Generalized Uncertainty Principle on Compact Stars

    CERN Document Server

    Ali, Ahmed Farag

    2013-01-01

    Based on the generalized uncertainty principle (GUP), proposed by some approaches to quantum gravity such as string theory and doubly special relativity theories, we investigate the effect of the GUP on the thermodynamic properties of compact stars with two different components. We note that the existence of the quantum gravity correction tends to resist the collapse of stars if the GUP parameter $\alpha$ takes values between the Planck scale and the electroweak scale. Compared with other approaches, the radii of compact stars are found to be smaller. Increasing energy almost exponentially decreases the radii of compact stars.

  20. The Perihelion Precession of Mercury and the Generalized Uncertainty Principle

    CERN Document Server

    Majumder, Barun

    2011-01-01

    Very recently authors in [1] proposed a new Generalized Uncertainty Principle (or GUP) with a linear term in the Planck length. In this Letter the effect of this linear term is studied perturbatively in the context of Keplerian orbits. The angle by which the perihelion of the orbit revolves over a complete orbital cycle is computed. The result is applied in the context of the precession of the perihelion of Mercury. As a consequence we get a lower bound on the new intermediate length scale offered by the GUP which is approximately 40 orders of magnitude below the Planck length.

  1. Generalized Uncertainty Principle in the Presence of Extra Dimensions

    Institute of Scientific and Technical Information of China (English)

    MU Ben-Bong; WU Hou-Wen; YANG Hai-Tang

    2011-01-01

    We argue that in the generalized uncertainty principle (GUP) model, the parameter $\beta_0$, whose square root multiplied by the Planck length $\ell_p$ approximates the minimum measurable distance, varies with energy scales. Since the minimal measurable length and extra dimensions are both suggested by quantum gravity theories, we investigate models based on the GUP and one extra dimension, compactified with radius $\rho$. We obtain an inspiring relation $\sqrt{\beta_0}\,\ell_p/\rho \sim \mathcal{O}(1)$. This relation is also consistent with the predictions at the Planck scale and the usual quantum mechanics scale. We also estimate the application range of the GUP model. It turns out that the minimum measurable length is exactly the compactification radius of the extra dimension.

  2. Bounds on Large Extra Dimensions from the Generalized Uncertainty Principle

    CERN Document Server

    Cavaglia, Marco; Hou, Shaoqi

    2016-01-01

    The Generalized Uncertainty Principle (GUP) implies the existence of a physical minimum length scale $l_m$. In this scenario, black holes must have a radius larger than $l_m$. They are hotter and evaporate faster than in standard Hawking thermodynamics. We study the effects of the GUP on black hole production and decay at the LHC in models with large extra dimensions. Lower bounds on the fundamental Planck scale and the minimum black hole mass at formation are determined from black hole production cross section limits by the CMS Collaboration. The existence of a minimum length generally decreases the lower bounds on the fundamental Planck scale obtained in the absence of a minimum length.

  3. Generalized uncertainty principle and thermostatistics: a semiclassical approach

    CERN Document Server

    Abbasiyan-Motlaq, M

    2015-01-01

    We present an exact treatment of the thermodynamics of physical systems in the framework of the generalized uncertainty principle (GUP). Our purpose is to study and compare the consequences of two GUPs, one of which implies a minimal length while the other predicts a minimal length and a maximal momentum. Using a semiclassical method, we exactly calculate the modified internal energies and heat capacities in the presence of generalized commutation relations. We show that the total shift in these quantities depends only on the deformed algebra, not on the system under study. Finally, the modified internal energy for a specific physical system such as the ideal gas is obtained in the framework of the two different GUPs.

  4. Multi-attribute mate choice decisions and uncertainty in the decision process: a generalized sequential search strategy.

    Science.gov (United States)

    Wiegmann, Daniel D; Weinersmith, Kelly L; Seubert, Steven M

    2010-04-01

    The behavior of females in search of a mate determines the likelihood that high quality males are encountered and adaptive search strategies rely on the effective use of available information on the quality of prospective mates. The sequential search strategy was formulated, like most models of search behavior, on the assumption that females obtain perfect information on the quality of encountered males. In this paper, we modify the strategy to allow for uncertainty of male quality and we determine how the magnitude of this uncertainty and the ability of females to inspect multiple male attributes to reduce uncertainty influence mate choice decisions. In general, searchers are sensitive to search costs and higher costs lower acceptance criteria under all versions of the model. The choosiness of searchers increases with the variability of the quality of prospective mates under conditions of the original model, but under conditions of uncertainty the choosiness of searchers may increase or decrease with the variability of inspected male attributes. The behavioral response depends on the functional relationship between observed male attributes and the fitness return to searchers and on costs associated with the search process. Higher uncertainty often induces searchers to pay more for information and under conditions of uncertainty the fitness return to searchers is never higher than under conditions of the original model. Further studies of the performance of alternative search strategies under conditions of uncertainty may consequently be necessary to identify search strategies likely to be used under natural conditions.

  5. A family-based likelihood ratio test for general pedigree structures that allows for genotyping error and missing data.

    Science.gov (United States)

    Yang, Yang; Wise, Carol A; Gordon, Derek; Finch, Stephen J

    2008-01-01

    The purpose of this work is the development of a family-based association test that allows for random genotyping errors and missing data and makes use of information on affected and unaffected pedigree members. We derive the conditional likelihood functions of the general nuclear family for the following scenarios: complete parental genotype data and no genotyping errors; only one genotyped parent and no genotyping errors; no parental genotype data and no genotyping errors; and no parental genotype data with genotyping errors. We find maximum likelihood estimates of the marker locus parameters, including the penetrances and population genotype frequencies under the null hypothesis that all penetrance values are equal and under the alternative hypothesis. We then compute the likelihood ratio test. We perform simulations to assess the adequacy of the central chi-square distribution approximation when the null hypothesis is true. We also perform simulations to compare the power of the TDT and this likelihood-based method. Finally, we apply our method to 23 SNPs genotyped in nuclear families from a recently published study of idiopathic scoliosis (IS). Our simulations suggest that this likelihood ratio test statistic follows a central chi-square distribution with 1 degree of freedom under the null hypothesis, even in the presence of missing data and genotyping errors. The power comparison shows that this likelihood ratio test is more powerful than the original TDT for the simulations considered. For the IS data, the marker rs7843033 shows the most significant evidence for our method (p = 0.0003), which is consistent with a previous report, which found rs7843033 to be the 2nd most significant TDTae p value among a set of 23 SNPs.
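
    The chi-square approximation checked above can be illustrated with a generic likelihood-ratio test skeleton; the binomial "transmission" toy example is my own and is not the family-based likelihood of the paper.

        import numpy as np
        from scipy.stats import chi2

        def lrt_pvalue(loglik_null, loglik_alt, df=1):
            """Likelihood ratio statistic 2*(l_alt - l_null) referred to a central
            chi-square with df degrees of freedom (the approximation whose adequacy
            the simulations described above assess)."""
            stat = 2.0 * (loglik_alt - loglik_null)
            return stat, chi2.sf(stat, df)

        # Toy check: 60 transmissions of the test allele out of 100 informative ones,
        # H0: p = 0.5 (no association) vs H1: p free (MLE p_hat = 0.6)
        k, n = 60, 100
        l0 = k * np.log(0.5) + (n - k) * np.log(0.5)
        l1 = k * np.log(k / n) + (n - k) * np.log(1 - k / n)
        print(lrt_pvalue(l0, l1))   # statistic ~ 4.03, p ~ 0.045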

  6. Estimating parameters of generalized integrate-and-fire neurons from the maximum likelihood of spike trains.

    Science.gov (United States)

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2011-11-01

    When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009 ), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
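
    As a much simplified illustration of fitting spiking-model parameters by maximum likelihood, the sketch below fits a Poisson GLM with an exponential nonlinearity to a binned spike train; it is not the Mihalas-Niebur generalized integrate-and-fire model, and all names and numbers are my own.

        import numpy as np
        from scipy.optimize import minimize

        def fit_glm_spike_model(stim, spikes, dt=0.001):
            """Maximum-likelihood fit of a minimal linear-nonlinear-Poisson spiking
            model with conditional intensity rate(t) = exp(w * stim(t) + b)."""
            def nll(theta):
                w, b = theta
                z = np.clip(w * stim + b, -30.0, 30.0)   # guard against overflow
                rate = np.exp(z)
                return np.sum(rate * dt) - np.sum(spikes * np.log(rate * dt))
            return minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead",
                            options={"maxiter": 2000}).x

        # Toy check: spikes generated from the same model family
        rng = np.random.default_rng(5)
        stim = rng.normal(size=20000)
        spikes = rng.poisson(np.exp(1.2 * stim + 2.0) * 0.001).clip(0, 1)
        print(fit_glm_spike_model(stim, spikes))   # roughly (1.2, 2.0)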

  7. The most general form of deformation of the Heisenberg algebra from the generalized uncertainty principle

    Directory of Open Access Journals (Sweden)

    Syed Masood

    2016-12-01

    Full Text Available In this paper, we will propose the most general form of the deformation of Heisenberg algebra motivated by the generalized uncertainty principle. This deformation of the Heisenberg algebra will deform all quantum mechanical systems. The form of the generalized uncertainty principle used to motivate these results will be motivated by the space fractional quantum mechanics, and non-locality in quantum mechanical systems. We also analyse a specific limit of this generalized deformation for one dimensional system, and in that limit, a nonlocal deformation of the momentum operator generates a local deformation of all one dimensional quantum mechanical systems. We analyse the low energy effects of this deformation on a harmonic oscillator, Landau levels, Lamb shift, and potential barrier. We also demonstrate that this deformation leads to a discretization of space.

  8. The most general form of deformation of the Heisenberg algebra from the generalized uncertainty principle

    Science.gov (United States)

    Masood, Syed; Faizal, Mir; Zaz, Zaid; Ali, Ahmed Farag; Raza, Jamil; Shah, Mushtaq B.

    2016-12-01

    In this paper, we will propose the most general form of the deformation of Heisenberg algebra motivated by the generalized uncertainty principle. This deformation of the Heisenberg algebra will deform all quantum mechanical systems. The form of the generalized uncertainty principle used to motivate these results will be motivated by the space fractional quantum mechanics, and non-locality in quantum mechanical systems. We also analyse a specific limit of this generalized deformation for one dimensional system, and in that limit, a nonlocal deformation of the momentum operator generates a local deformation of all one dimensional quantum mechanical systems. We analyse the low energy effects of this deformation on a harmonic oscillator, Landau levels, Lamb shift, and potential barrier. We also demonstrate that this deformation leads to a discretization of space.

  9. The Most General Form of Deformation of the Heisenberg Algebra from the Generalized Uncertainty Principle

    CERN Document Server

    Masood, Syed; Zaz, Zaid; Ali, Ahmed Farag; Raza, Jamil; Shah, Mushtaq B

    2016-01-01

    In this paper, we will propose the most general form of the deformation of Heisenberg algebra motivated by the generalized uncertainty principle. This deformation of the Heisenberg algebra will deform all quantum mechanical systems. The form of the generalized uncertainty principle used to motivate these results will be motivated by space fractional quantum mechanics and non-locality in quantum mechanical systems. We also analyse a specific limit of this generalized deformation for one dimensional system, and in that limit, a nonlocal deformation of the momentum operator generates a local deformation of all one dimensional quantum mechanical systems. We analyse the low energy effects of this deformation on a harmonic oscillator, Landau levels, Lamb shift, and potential barrier. We also demonstrate that this deformation leads to a discretization of space.

  10. Topologies of the conditional ancestral trees and full-likelihood-based inference in the general coalescent tree framework.

    Science.gov (United States)

    Sargsyan, Ori

    2010-08-01

    The general coalescent tree framework is a family of models for determining ancestries among random samples of DNA sequences at a nonrecombining locus. The ancestral models included in this framework can be derived under various evolutionary scenarios. Here, a computationally tractable full-likelihood-based inference method for neutral polymorphisms is presented, using the general coalescent tree framework and the infinite-sites model for mutations in DNA sequences. First, an exact sampling scheme is developed to determine the topologies of conditional ancestral trees. However, this scheme has some computational limitations and to overcome these limitations a second scheme based on importance sampling is provided. Next, these schemes are combined with Monte Carlo integrations to estimate the likelihood of full polymorphism data, the ages of mutations in the sample, and the time of the most recent common ancestor. In addition, this article shows how to apply this method for estimating the likelihood of neutral polymorphism data in a sample of DNA sequences completely linked to a mutant allele of interest. This method is illustrated using the data in a sample of DNA sequences at the APOE gene locus.

  11. Horizon Quantum Mechanics of Generalized Uncertainty Principle Black Holes

    CERN Document Server

    Manfredi, Luciano

    2016-01-01

    We study the Horizon Wavefunction (HWF) description of a generalized uncertainty principle inspired metric that admits sub-Planckian black holes, where the black hole mass $m$ is replaced by $M = m\left( 1 + \frac{\beta}{2} \frac{M_{\rm Pl}^2}{m^2} \right)$. Considering the case of a wave-packet shaped by a Gaussian distribution, we compute the HWF and the probability ${\cal {P}}_{BH}$ that the source is a (quantum) black hole, i.e., that it lies within its horizon radius. For the case $\beta > 0$, a minimum in ${\cal {P}}_{BH}$ is encountered, meaning that every particle has some probability of decaying to a black hole. Furthermore, for sufficiently large $\beta$ we find that every particle is a quantum black hole, in agreement with the intuitive effect of increasing $\beta$, which creates larger $M$ and $R_{H}$ terms. This is likely due to a "dimensional reduction" feature of the model, where the black hole characteristics for sub-Planckian black holes mimic those in $(1+1)$-dimensions and the horizon s...

  12. Horizon Wavefunction of Generalized Uncertainty Principle Black Holes

    Directory of Open Access Journals (Sweden)

    Luciano Manfredi

    2016-01-01

    Full Text Available We study the Horizon Wavefunction (HWF) description of a Generalized Uncertainty Principle inspired metric that admits sub-Planckian black holes, where the black hole mass m is replaced by $M = m\,(1+\tfrac{\beta}{2}M_{\rm Pl}^2/m^2)$. Considering the case of a wave-packet shaped by a Gaussian distribution, we compute the HWF and the probability $P_{BH}$ that the source is a (quantum) black hole, that is, that it lies within its horizon radius. For the case $\beta > 0$, a minimum in $P_{BH}$ is encountered, meaning that every particle has some probability of decaying to a black hole. Furthermore, for sufficiently large $\beta$ we find that every particle is a quantum black hole, in agreement with the intuitive effect of increasing $\beta$, which creates larger $M$ and $R_H$ terms. This is likely due to a “dimensional reduction” feature of the model, where the black hole characteristics for sub-Planckian black holes mimic those in $(1+1)$ dimensions and the horizon size grows as $R_H \sim M^{-1}$.

  13. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  14. Intolerance of uncertainty, causal uncertainty, causal importance, self-concept clarity and their relations to generalized anxiety disorder.

    Science.gov (United States)

    Kusec, Andrea; Tallon, Kathleen; Koerner, Naomi

    2016-06-01

    Although numerous studies have provided support for the notion that intolerance of uncertainty plays a key role in pathological worry (the hallmark feature of generalized anxiety disorder (GAD)), other uncertainty-related constructs may also have relevance for the understanding of individuals who engage in pathological worry. Three constructs from the social cognition literature, causal uncertainty, causal importance, and self-concept clarity, were examined in the present study to assess the degree to which these explain unique variance in GAD, over and above intolerance of uncertainty. N = 235 participants completed self-report measures of trait worry, GAD symptoms, and uncertainty-relevant constructs. A subgroup was subsequently classified as low in GAD symptoms (n = 69) or high in GAD symptoms (n = 54) based on validated cut scores on measures of trait worry and GAD symptoms. In logistic regressions, only elevated intolerance of uncertainty and lower self-concept clarity emerged as unique correlates of high (vs. low) GAD symptoms. The possible role of self-concept uncertainty in GAD and the utility of integrating social cognition theories and constructs into clinical research on intolerance of uncertainty are discussed.

  15. Strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN Changming; ZHAO Lincheng

    2005-01-01

    In a generalized linear model with $q \times 1$ responses, bounded and fixed $p \times q$ regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i^{T}$, a moment condition on the responses as weak as possible and other mild regularity conditions, we prove that, with probability one, the quasi-likelihood equation has a solution $\hat{\beta}_n$ for all large sample sizes $n$, which converges to the true regression parameter $\beta_0$. This result is an essential improvement over the relevant results in the literature.

  16. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  17. Likelihood Inference under Generalized Hybrid Censoring Scheme with Competing Risks

    Institute of Scientific and Technical Information of China (English)

    MAO Song; SHI Yi-min

    2016-01-01

    Statistical inference is developed for the analysis of generalized type-II hybrid censoring data under an exponential competing risks model. To address the problem that approximate methods perform unsatisfactorily for small sample sizes, we establish the exact conditional distributions of the estimators of the parameters by the conditional moment generating function (CMGF). Furthermore, confidence intervals (CIs) are constructed from the exact distributions, approximate distributions, and the bootstrap method, respectively, and their performances are evaluated by Monte Carlo simulations. Finally, a real data set is analyzed to illustrate all the methods developed here.

  18. An asymptotic approximation of the marginal likelihood for general Markov models

    CERN Document Server

    Zwiernik, Piotr

    2010-01-01

    The standard Bayesian Information Criterion (BIC) is derived under regularity conditions which are not always satisfied by the graphical models with hidden variables. In this paper we derive the BIC score for Bayesian networks in the case of binary data and when the underlying graph is a rooted tree and all the inner nodes represent hidden variables. This provides a direct generalization of a similar formula given by Rusakov and Geiger for naive Bayes models. The main tool used in this paper is a connection between asymptotic approximation of Laplace integrals and the real log-canonical threshold.

  19. Supersymmetry breaking as a new source for the generalized uncertainty principle

    Science.gov (United States)

    Faizal, Mir

    2016-06-01

    In this letter, we will demonstrate that the breaking of supersymmetry by a non-anticommutative deformation can be used to generate the generalized uncertainty principle. We will analyze the physical reasons for this observation, in the framework of string theory. We also discuss the relation between the generalized uncertainty principle and the Lee-Wick field theories.

  20. String theory, scale relativity and the generalized uncertainty principle

    CERN Document Server

    Castro, C

    1995-01-01

    An extension/modification of the Stringy Heisenberg Uncertainty principle is derived within the framework of the theory of Special Scale-Relativity proposed by Nottale. Based on the fractal structure of two-dimensional Quantum Gravity, which has attracted considerable interest recently, we conjecture that the underlying fundamental principle behind String theory should be based on an extension of Scale Relativity where both dynamics as well as scales are incorporated on the same footing.

  1. Application of a generalized likelihood ratio test statistic to MAGIC data

    CERN Document Server

    Klepser, S; 10.1063/1.4772359

    2012-01-01

    The commonly used detection test statistic for Cherenkov telescope data is Li & Ma (1983), Eq. 17. It evaluates the compatibility of event counts in an on-source region with those in a representative off-region. It does not exploit the typically known gamma-ray point spread function (PSF) of a system, and in practice its application requires either assumptions on the symmetry of the acceptance across the field of view, or Monte Carlo simulations. MAGIC has an azimuth-dependent, asymmetric acceptance which required a careful review of detection statistics. Besides an adapted Li & Ma based technique, the recently presented generalized LRT statistic of [1] is now in use. It is more flexible, more sensitive and less systematics-affected, because it is highly customized for multi-pointing Cherenkov telescope data with a known PSF. We present the application of this new method to archival MAGIC data and compare it to the other, Li & Ma-based method.
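
    Since the record turns on Li & Ma (1983), Eq. 17, a small self-contained implementation of that published formula may help; the counts and exposure ratio in the example are made up.

        import numpy as np

        def li_ma_significance(n_on, n_off, alpha):
            """Li & Ma (1983), Eq. 17: significance of an on-source excess.

            n_on  : counts in the on-source region
            n_off : counts in the off-source region
            alpha : ratio of on-source to off-source exposure
            """
            term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
            term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
            return np.sqrt(2.0 * (term_on + term_off))

        # Example with invented counts: 130 on, 400 off, alpha = 0.2 -> about 4.6 sigma
        print(li_ma_significance(130, 400, 0.2))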

  2. General Analyzing and Research on Uncertainty of Multi-Scale Representation for Street-Block Settlement

    Science.gov (United States)

    Xu, F.; Niu, J.; Chi, Z.; Xie, W.

    2013-05-01

    Analyzing and evaluating the reliability of multi-scale representation of spatial data has become an important issue in digital cartography and GIS. Settlement is the main content of maps; studying the uncertainty of its multi-scale representation is therefore one of the important aspects of the uncertainty of multi-scale representation of spatial data. In this paper, the uncertainty of multi-scale representation of street-block settlement is analyzed comprehensively and systematically. This paper holds that map generalization is the essential cause of uncertainty in multi-scale representation of street-block settlement. First, the essence and types of uncertainty in multi-scale representation of street-block settlement are explored, and these uncertainties are divided into four large classes and seven subclasses. Second, among all kinds of uncertainties of multi-scale representation of street-block settlement, this paper mainly studies the uncertainty of symbolic representation of street-block settlement, and establishes the evaluation content, evaluation indexes and computing method for the uncertainty of street-block and street-network generalization and building generalization. The result can be used to evaluate the quality of scale-transfer methods and the uncertainty of products of multi-scale representation of street-block settlement.

  3. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
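
    The paper's own counterexample is more subtle than can be reproduced here; as a hedged stand-in in the same spirit, the sketch below compares a crude unbiased estimator, the (biased) maximum likelihood estimator, and an estimator based on the sufficient statistic for the textbook uniform(0, θ) model, using made-up values of θ and n.

        import numpy as np

        rng = np.random.default_rng(2)
        theta, n, reps = 5.0, 20, 20000

        crude, mle, suff = [], [], []
        for _ in range(reps):
            x = rng.uniform(0, theta, n)
            crude.append(2 * x.mean())                # crude unbiased estimator
            mle.append(x.max())                       # MLE, biased downward
            suff.append((n + 1) / n * x.max())        # unbiased, built on the sufficient statistic

        for name, est in [("2*mean", crude), ("MLE max", mle), ("(n+1)/n*max", suff)]:
            est = np.array(est)
            print(f"{name:12s} bias={est.mean() - theta:+.4f}  MSE={np.mean((est - theta)**2):.4f}")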

  4. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.] PMID:27499547

  5. Fine Structure Constant, Domain Walls, and Generalized Uncertainty Principle in the Universe

    Directory of Open Access Journals (Sweden)

    Luigi Tedesco

    2011-01-01

    We study the corrections to the fine structure constant from the generalized uncertainty principle in the spacetime of a domain wall. We also calculate the corrections to the standard formula for the energy of the electron in the hydrogen atom in the ground state, in the case of the spacetime of a domain wall and the generalized uncertainty principle. The results generalize the cases known in the literature.

  6. Context, Experience, Expectation, and Action—Towards an Empirically Grounded, General Model for Analyzing Biographical Uncertainty

    Directory of Open Access Journals (Sweden)

    Herwig Reiter

    2010-01-01

    The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120

  7. Generalized Uncertainty Principle and Recent Cosmic Inflation Observations

    CERN Document Server

    Tawfik, Abdel Nasser

    2014-01-01

    The recent background imaging of cosmic extragalactic polarization (BICEP2) observations are believed to be evidence for cosmic inflation. BICEP2 provided a first direct evidence for the inflation, determined its energy scale and debriefed witnesses for the quantum gravitational processes. The ratio of scalar-to-tensor fluctuations $r$, which is the canonical measurement of the gravitational waves, was estimated as $r=0.2_{-0.05}^{+0.07}$. Apparently, this value agrees well with the upper bound value corresponding to PLANCK $r\leq 0.012$ and to WMAP9 experiment $r=0.2$. It is believed that the existence of a minimal length is one of the greatest predictions leading to modifications in the Heisenberg uncertainty principle or a GUP at the Planck scale. In the present work, we investigate the possibility of interpreting recent BICEP2 observations through quantum gravity or GUP. We estimate the slow-roll parameters, the tensorial and the scalar density fluctuations which are characterized by the scalar field $...

  8. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  9. Project management under uncertainty beyond beta: The generalized bicubic distribution

    Directory of Open Access Journals (Sweden)

    José García Pérez

    2016-01-01

    The beta distribution has traditionally been employed in the PERT methodology and generally used for modeling bounded continuous random variables based on expert’s judgment. The impossibility of estimating four parameters from the three values provided by the expert when the beta distribution is assumed to be the underlying distribution has been widely debated. This paper presents the generalized bicubic distribution as a good alternative to the beta distribution since, when the variance depends on the mode, the generalized bicubic distribution approximates the kurtosis of the Gaussian distribution better than the beta distribution. In addition, this distribution presents good properties in the PERT methodology in relation to moderation and conservatism criteria. Two empirical applications are presented to demonstrate the adequateness of this new distribution.

  10. Semiclassical corrections to black hole entropy and the generalized uncertainty principle

    Energy Technology Data Exchange (ETDEWEB)

    Bargueño, Pedro, E-mail: p.bargueno@uniandes.edu.co [Departamento de Física, Universidad de los Andes, Apartado Aéreo 4976, Bogotá, Distrito Capital (Colombia); Vagenas, Elias C., E-mail: elias.vagenas@ku.edu.kw [Theoretical Physics Group, Department of Physics, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)

    2015-03-06

    In this paper, employing the path integral method in the framework of a canonical description of a Schwarzschild black hole, we obtain the corrected inverse temperature and entropy of the black hole. The corrections are those coming from the quantum effects as well as from the Generalized Uncertainty Principle effects. Furthermore, an equivalence between the polymer quantization and the Generalized Uncertainty Principle description is shown provided the parameters characterizing these two descriptions are proportional.

  11. Time-dependent q-deformed bi-coherent states for generalized uncertainty relations

    Science.gov (United States)

    Gouba, Laure

    2015-07-01

    We consider the time-dependent bi-coherent states that are essentially the Gazeau-Klauder coherent states for the two dimensional noncommutative harmonic oscillator. Starting from some q-deformations of the oscillator algebra for which the entire deformed Fock space can be constructed explicitly, we define the q-deformed bi-coherent states. We verify the generalized Heisenberg's uncertainty relations projected onto these states. For the initial value in time, the states are shown to satisfy a generalized version of Heisenberg's uncertainty relations. For the initial value in time and for the parameter of noncommutativity θ = 0, the inequalities are saturated for the simultaneous measurement of the position-momentum observables. When the time evolves, the uncertainty products are different from their values at the initial time and do not always respect the generalized uncertainty relations.

  12. A survey of resilience, burnout, and tolerance of uncertainty in Australian general practice registrars

    Directory of Open Access Journals (Sweden)

    Cooke Georga PE

    2013-01-01

    Background: Burnout and intolerance of uncertainty have been linked to low job satisfaction and lower quality patient care. While resilience is related to these concepts, no study has examined these three concepts in a cohort of doctors. The objective of this study was to measure resilience, burnout, compassion satisfaction, personal meaning in patient care and intolerance of uncertainty in Australian general practice (GP) registrars. Methods: We conducted a paper-based cross-sectional survey of GP registrars in Australia from June to July 2010, recruited from a newsletter item or registrar education events. Survey measures included the Resilience Scale-14, a single-item scale for burnout, the Professional Quality of Life (ProQOL) scale, the Personal Meaning in Patient Care scale, the Intolerance of Uncertainty-12 scale, and the Physician Response to Uncertainty scale. Results: 128 GP registrars responded (response rate 90%). Fourteen percent of registrars were found to be at risk of burnout using the single-item scale for burnout, but none met the criteria for burnout using the ProQOL scale. Secondary traumatic stress, general intolerance of uncertainty, anxiety due to clinical uncertainty and reluctance to disclose uncertainty to patients were associated with being at higher risk of burnout, but sex, age, practice location, training duration, years since graduation, and reluctance to disclose uncertainty to physicians were not. Only ten percent of registrars had high resilience scores. Resilience was positively associated with compassion satisfaction and personal meaning in patient care. Resilience was negatively associated with burnout, secondary traumatic stress, inhibitory anxiety, general intolerance to uncertainty, concern about bad outcomes and reluctance to disclose uncertainty to patients. Conclusions: GP registrars in this survey showed a lower level of burnout than in other recent surveys of the broader junior doctor population in both Australia

  13. Uncertainty Relations and Sparse Signal Recovery for Pairs of General Signal Sets

    CERN Document Server

    Kuppinger, Patrick; Bölcskei, Helmut

    2011-01-01

    We present an uncertainty relation for the representation of signals in two different general (possibly redundant or incomplete) signal sets. This uncertainty relation is relevant for the analysis of signals containing two distinct features each of which can be described sparsely in a suitable general signal set. Furthermore, the new uncertainty relation is shown to lead to improved sparsity thresholds for recovery of signals that are sparse in general dictionaries. Specifically, our results improve on the well-known $(1+1/d)/2$-threshold for dictionaries with coherence $d$ by up to a factor of two. Furthermore, we provide probabilistic recovery guarantees for pairs of general dictionaries that also allow us to understand which parts of a general dictionary one needs to randomize over to "weed out" the sparsity patterns that prohibit breaking the square-root bottleneck.

  14. An uncertainty relation in terms of generalized metric adjusted skew information and correlation measure

    Science.gov (United States)

    Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang

    2016-12-01

    The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads $U_\rho^{(g,f)}(A)\,U_\rho^{(g,f)}(B) \ge \frac{f(0)^{2} l}{k}\,\big|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\big|^{2}$ for some operator monotone functions f and g, all n-dimensional observables A, B and a non-singular density matrix $\rho$. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.
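
    As a small numerical companion to the Wigner-Yanase application mentioned at the end of the record, the sketch below evaluates the ordinary Wigner-Yanase skew information $I_\rho(A) = -\frac{1}{2}\mathrm{Tr}([\sqrt{\rho},A]^2)$ for an arbitrary qubit state and observable; it is only this special case, not the generalized metric adjusted quantity of the paper.

        import numpy as np
        from scipy.linalg import sqrtm

        def wy_skew_information(rho, A):
            # Wigner-Yanase skew information: -1/2 Tr([sqrt(rho), A]^2)
            s = sqrtm(rho)
            c = s @ A - A @ s                    # commutator [sqrt(rho), A]
            return -0.5 * np.trace(c @ c).real

        # Arbitrary non-singular qubit density matrix and the Pauli-X observable
        rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
        A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
        print(wy_skew_information(rho, A))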

  15. An uncertainty relation in terms of generalized metric adjusted skew information and correlation measure

    Science.gov (United States)

    Fan, Ya-Jing; Cao, Huai-Xin; Meng, Hui-Xian; Chen, Liang

    2016-09-01

    The uncertainty principle in quantum mechanics is a fundamental relation with different forms, including Heisenberg's uncertainty relation and Schrödinger's uncertainty relation. In this paper, we prove a Schrödinger-type uncertainty relation in terms of generalized metric adjusted skew information and correlation measure by using operator monotone functions, which reads $U_\rho^{(g,f)}(A)\,U_\rho^{(g,f)}(B) \ge \frac{f(0)^{2} l}{k}\,\big|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\big|^{2}$ for some operator monotone functions f and g, all n-dimensional observables A, B and a non-singular density matrix $\rho$. As applications, we derive some new uncertainty relations for Wigner-Yanase skew information and Wigner-Yanase-Dyson skew information.

  16. Uncertainty analysis of statistical downscaling models using general circulation model over an international wetland

    Science.gov (United States)

    Etemadi, H.; Samadi, S.; Sharifikia, M.

    2014-06-01

    The regression-based statistical downscaling model (SDSM) is an appropriate method which is broadly used to resolve the coarse spatial resolution of general circulation models (GCMs). Nevertheless, the assessment of uncertainty propagation linked with climatic variables is essential to any climate change impact study. This study presents a procedure to characterize the uncertainty of two GCM models linked with the Long Ashton Research Station Weather Generator (LARS-WG) and SDSM in one of the most vulnerable international wetlands, namely "Shadegan" in an arid region of Southwest Iran. In the case of daily temperature, uncertainty is estimated by comparing the monthly mean and variance of downscaled and observed daily data at a 95 % confidence level. Uncertainties were then evaluated by comparing monthly mean dry and wet spell lengths and their 95 % CI in daily precipitation downscaling using the 1987-2005 interval. The uncertainty results indicated that LARS-WG is the most proficient model at reproducing various statistical characteristics of observed data at the 95 % uncertainty bounds, while the SDSM model is the least capable in this respect. The uncertainty analyses at three different climate stations produced significantly different climate change responses at the 95 % CI. Finally, the range of plausible climate change projections suggested a need for decision makers to augment their long-term wetland management plans to reduce the wetland's vulnerability to climate change impacts.

  17. An exploration of the uncertainty relation satisfied by BP network learning ability and generalization ability

    Institute of Scientific and Technical Information of China (English)

    LI Zuoyong; PENG Lihong

    2004-01-01

    This paper analyses the intrinsic relationship between BP network learning ability, generalization ability and other influencing factors when overfitting occurs, and introduces the multiple correlation coefficient to describe the complexity of samples. Following the calculation uncertainty principle and the minimum principle of neural network structural design, it provides an analogy of the general uncertainty relation in the information transfer process, and ascertains the uncertainty relation between the training relative error of the training sample set, which reflects the network learning ability, and the test relative error of the test sample set, which represents the network generalization ability. Through simulation of BP network overfitting with different types of functions, it is ascertained that the overfit parameter q in the relation generally has a span of 7×10^{-3} to 7×10^{-2}. The uncertainty relation then helps to obtain a formula for calculating the number of hidden nodes of a network with good generalization ability, under the condition that the multiple correlation coefficient is used to describe sample complexity and the given approximation error requirement is satisfied; the rationality of this formula is verified. This paper also points out the best method of stopping training for the given sample set so as to improve the generalization ability of the BP network.

  18. Nonlinear Schrödinger equation from generalized exact uncertainty principle

    Science.gov (United States)

    Rudnicki, Łukasz

    2016-09-01

    Inspired by the generalized uncertainty principle, which adds gravitational effects to the standard description of quantum uncertainty, we extend the exact uncertainty principle approach by Hall and Reginatto (2002 J. Phys. A: Math. Gen. 35 3289), and obtain a (quasi)nonlinear Schrödinger equation. This quantum evolution equation of unusual form enjoys several desired properties, such as separation of non-interacting subsystems or plane-wave solutions for free particles. Starting with the harmonic oscillator example, we show that every solution of this equation respects the gravitationally induced minimal position uncertainty proportional to the Planck length. Quite surprisingly, our result successfully merges the core of classical physics with non-relativistic quantum mechanics in its extremal form. We predict that the commonly accepted phenomenon, namely a modification of a free-particle dispersion relation due to quantum gravity, might not occur in reality.

  19. Time-dependent q-deformed coherent states for generalized uncertainty relations

    CERN Document Server

    Dey, Sanjib; Gouba, Laure; Castro, Paulo G

    2012-01-01

    We investigate properties of generalized time-dependent q-deformed coherent states for a noncommutative harmonic oscillator. The states are shown to satisfy a generalized version of Heisenberg's uncertainty relations. For the initial value in time the states are demonstrated to be squeezed, i.e. the inequalities are saturated, whereas when time evolves the uncertainty product oscillates away from this value albeit still respecting the relations. For the canonical variables on a noncommutative space we verify explicitly that Ehrenfest's theorem hold at all times. We conjecture that the model exhibits revival times to infinite order. Explicit sample computations for the fractional revival times and superrevival times are presented.

  20. The entropy of the noncommutative acoustic black hole based on generalized uncertainty principle

    CERN Document Server

    Anacleto, M A; Passos, E; Santos, W P

    2014-01-01

    In this paper we investigate the statistical entropy of a 3-dimensional rotating acoustic black hole based on the generalized uncertainty principle. In our results we obtain an area entropy and a correction term associated with the noncommutative acoustic black hole when the parameter $\lambda$ introduced in the generalized uncertainty principle takes a specific value. In this method there is no need to introduce the ultraviolet cut-off, and divergences are eliminated. Moreover, the small mass approximation is not necessary in the original brick-wall model.

  1. Robust adaptive synchronization of general dynamical networks with multiple delays and uncertainties

    Indian Academy of Sciences (India)

    LU YIMING; HE PING; MA SHU-HUA; LI GUO-ZHI; MOBAYBEN SALEH

    2016-06-01

    In this article, a general complex dynamical network which contains multiple delays and uncertainties is introduced; it contains time-varying coupling delays, time-varying node delay, and uncertainties of both the inner- and outer-coupling matrices. A robust adaptive synchronization scheme for these general complex networks with multiple delays and uncertainties is established by employing the robust adaptive control principle and the Lyapunov stability theory. We choose some suitable adaptive synchronization controllers to ensure the robust synchronization of this dynamical network. Numerical simulations of the time-delay Lorenz chaotic system as the local dynamical node are provided to observe and verify the viability and productivity of the theoretical research in this paper. Compared to the achievements of previous research, the research in this paper is quite comprehensive and universal.

  2. Generalized Uncertainty Principle Corrections to the Simple Harmonic Oscillator in Phase Space

    CERN Document Server

    Das, Saurya; Walton, Mark A

    2016-01-01

    We compute Wigner functions for the harmonic oscillator including corrections from generalized uncertainty principles (GUPs), and study the corresponding marginal probability densities and other properties. We show that the GUP corrections to the Wigner functions can be significant, and comment on their potential measurability in the laboratory.

  3. (Anti-)de Sitter Black Hole Thermodynamics and the Generalized Uncertainty Principle

    CERN Document Server

    Bolen, B; Bolen, Brett; Cavaglia, Marco

    2004-01-01

    We extend the derivation of the Hawking temperature of a Schwarzschild black hole via the Heisenberg uncertainty principle to the de Sitter and anti-de Sitter spacetimes. The thermodynamics of the Schwarzschild-(anti-)de Sitter black holes is obtained from the generalized uncertainty principle of string theory and non-commutative geometry. This may explain why the thermodynamics of (anti-)de Sitter-like black holes admits a holographic description in terms of a dual quantum conformal field theory, whereas the thermodynamics of Schwarzschild-like black holes does not.

  4. Communicating and dealing with uncertainty in general practice: the association with neuroticism.

    Directory of Open Access Journals (Sweden)

    Antonius Schneider

    Diagnostic reasoning in the primary care setting, where presented problems and patients are mostly unselected, appears as a complex process. The aim was to develop a questionnaire to describe how general practitioners (GPs) deal with uncertainty, to gain more insight into the decisional process. The association of personality traits with medical decision making was investigated additionally. Raw items were identified by literature research and a focus group. Items were improved by interviewing ten GPs with the thinking-aloud method. A personal case vignette related to a complex and uncertain situation was introduced. The final questionnaire was administered to 228 GPs in Germany. Factorial validity was calculated with explorative and confirmatory factor analysis. The results of the Communicating and Dealing with Uncertainty (CoDU) questionnaire were compared with the scales of the 'Physician Reaction to Uncertainty' (PRU) questionnaire and with the personality traits which were determined with the Big Five Inventory (BFI-K). The items could be assigned to four scales with varying internal consistency, namely 'communicating uncertainty' (Cronbach alpha 0.79), 'diagnostic action' (0.60), 'intuition' (0.39) and 'extended social anamnesis' (0.69). Neuroticism was positively associated with all PRU scales: 'anxiety due to uncertainty' (Pearson correlation 0.487), 'concerns about bad outcomes' (0.488), 'reluctance to disclose uncertainty to patients' (0.287), 'reluctance to disclose mistakes to physicians' (0.212), and negatively associated with the CoDU scale 'communicating uncertainty' (-0.242) (p<0.01 for all). 'Extraversion' (0.146; p<0.05), 'agreeableness' (0.145, p<0.05), 'conscientiousness' (0.168, p<0.05) and 'openness to experience' (0.186, p<0.01) were significantly positively associated with 'communicating uncertainty'. 'Extraversion' (0.162), 'conscientiousness' (0.158) and 'openness to experience' (0.155) were associated with 'extended social anamnesis' (p<0.05). The

  5. Impacts of Generalized Uncertainty Principle on Black Hole Thermodynamics and Salecker-Wigner Inequalities

    CERN Document Server

    Tawfik, A

    2013-01-01

    We investigate the impacts of Generalized Uncertainty Principle (GUP) proposed by some approaches to quantum gravity such as String Theory and Doubly Special Relativity on black hole thermodynamics and Salecker-Wigner inequalities. Utilizing Heisenberg uncertainty principle, the Hawking temperature, Bekenstein entropy, specific heat, emission rate and decay time are calculated. As the evaporation entirely eats up the black hole mass, the specific heat vanishes and the temperature approaches infinity with an infinite radiation rate. It is found that the GUP approach prevents the black hole from the entire evaporation. It implies the existence of remnants at which the specific heat vanishes. The same role is played by the Heisenberg uncertainty principle in constructing the hydrogen atom. We discuss how the linear GUP approach solves the entire-evaporation-problem. Furthermore, the black hole lifetime can be estimated using another approach; the Salecker-Wigner inequalities. Assuming that the quantum position u...

  6. What Is Going On Around Here? Intolerance of Uncertainty Predicts Threat Generalization

    Science.gov (United States)

    Morriss, Jayne; Macdonald, Birthe; van Reekum, Carien M.

    2016-01-01

    Attending to stimuli that share perceptual similarity to learned threats is an adaptive strategy. However, prolonged threat generalization to cues signalling safety is considered a core feature of pathological anxiety. One potential factor that may sustain over-generalization is sensitivity to future threat uncertainty. To assess the extent to which Intolerance of Uncertainty (IU) predicts threat generalization, we recorded skin conductance in 54 healthy participants during an associative learning paradigm, where threat and safety cues varied in perceptual similarity. Lower IU was associated with stronger discrimination between threat and safety cues during acquisition and extinction. Higher IU, however, was associated with generalized responding to threat and safety cues during acquisition, and delayed discrimination between threat and safety cues during extinction. These results were specific to IU, over and above other measures of anxious disposition. These findings highlight: (1) a critical role of uncertainty-based mechanisms in threat generalization, and (2) IU as a potential risk factor for anxiety disorder development. PMID:27167217

  7. Hydrologic evaluation of a Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    Science.gov (United States)

    Sarachi, S.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    Development of satellite-based precipitation retrieval algorithms and their use in hydroclimatic studies have been of great interest to hydrologists. It is important to understand the uncertainty associated with precipitation products and how it further contributes to the variability in stream flow simulation. In this study a mixture model of Generalized Normal Distribution and Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based (PERSIANN) and stage IV radar rainfall. The study area for constructing the uncertainty model covers a 15° × 15° box of 0.25° × 0.25° cells over the eastern United States for summer 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. This uncertainty model is evaluated using data from the National Weather Service (NWS) Distributed Hydrologic Model Intercomparison Project - Phase 2 (DMIP 2) basin over the Illinois River basin south of Siloam, OK. This data covers the time period of 2006 to 2008. The uncertainty range of precipitation is estimated. The impact of precipitation uncertainty on the stream flow estimation is demonstrated by Monte Carlo simulation of precipitation forcing in the Sacramento Soil Moisture Accounting (SAC-SMA) model. The results show that using precipitation along with its uncertainty distribution as forcing to SAC-SMA makes it possible to estimate the uncertainty associated with the stream flow simulation (in this case study a 90% confidence interval is used). The mean of this stream flow confidence interval is compared to the reference stream flow for evaluation of the model, and the results show that this method helps to better estimate the variability of the stream flow simulation along with its statistics, e.g. percent bias and root mean squared error.
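
    The following sketch only mimics the workflow described above with invented numbers: a satellite precipitation series is perturbed with draws from a generalized-normal/gamma mixture (the parameters are not the study's fitted GND-G model), pushed through a toy runoff relation standing in for SAC-SMA, and summarized by a 90% band.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical daily satellite precipitation (mm)
        p_sat = rng.gamma(2.0, 4.0, 365)

        def perturb(p):
            # Mixture error: generalized normal most of the time, gamma occasionally
            use_gamma = rng.random(p.shape) < 0.3
            gn = stats.gennorm.rvs(beta=1.5, scale=2.0, size=p.shape, random_state=rng)
            gm = stats.gamma.rvs(a=2.0, scale=1.5, size=p.shape, random_state=rng) - 3.0
            return np.clip(p + np.where(use_gamma, gm, gn), 0.0, None)

        def toy_runoff(p, coeff=0.4):
            return coeff * p                     # crude stand-in for a hydrologic model

        ens = np.array([toy_runoff(perturb(p_sat)) for _ in range(500)])
        lo, hi = np.percentile(ens, [5, 95], axis=0)     # 90% confidence band
        print("mean width of the 90% streamflow band:", np.mean(hi - lo))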

  8. A generalized Gaussian distribution based uncertainty sampling approach and its application in actual evapotranspiration assimilation

    Science.gov (United States)

    Chen, Shaohui

    2017-09-01

    It is extremely important for ensemble-based actual evapotranspiration assimilation (AETA) to accurately sample the uncertainties. Traditionally, the perturbing ensemble is sampled from one prescribed multivariate normal distribution (MND). However, MND falls short in capturing the non-MND uncertainties caused by the nonlinear integration of land surface models, while these hypernormal uncertainties can be better characterized by the generalized Gaussian distribution (GGD), which takes MND as a special case. In this paper, one novel GGD-based uncertainty sampling approach is outlined to create one hypernormal ensemble for the purpose of better improving land surface models with observation. With this sampling method, various assimilation methods can be tested in a common equation form. Experimental results on the Noah LSM show that the outlined method is more powerful than MND in reducing the misfit between model forecasts and observations in terms of actual evapotranspiration, skin temperature, and soil moisture/temperature in the 1st layer, and also indicate that the energy and water balances constrain ensemble-based assimilation to simultaneously optimize all state and diagnostic variables. The overall evaluation shows that the outlined approach is a better alternative than the traditional MND method for capturing assimilation uncertainties, and it can serve as a useful tool for optimizing hydrological models with data assimilation.
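
    A minimal sketch, under assumed settings, of drawing a "hypernormal" perturbation ensemble from a generalized Gaussian: scipy's gennorm shape parameter beta = 2 recovers the Gaussian case, while beta < 2 gives the heavier tails discussed above; the ensemble size and per-variable scales are illustrative only.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        n_members, n_state = 50, 3
        scale = np.array([0.02, 0.5, 1.0])       # per-variable perturbation scales (assumed)

        def sample_ensemble(beta):
            pert = stats.gennorm.rvs(beta, loc=0.0, scale=1.0,
                                     size=(n_members, n_state), random_state=rng)
            return pert * scale                  # independent (factorized) perturbations

        for beta in (2.0, 1.2):                  # Gaussian vs. heavier-tailed ensemble
            e = sample_ensemble(beta)
            print(f"beta={beta}: excess kurtosis per variable =",
                  np.round(stats.kurtosis(e, axis=0), 2))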

  9. Development of a General Package for Resolution of Uncertainty-Related Issues in Reservoir Engineering

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2017-02-01

    Reservoir simulations always involve a large number of parameters to characterize the properties of formation and fluid, many of which are subject to uncertainties owing to spatial heterogeneity and insufficient measurements. To provide solutions to uncertainty-related issues in reservoir simulations, a general package called GenPack has been developed. GenPack includes three main functions required for full stochastic analysis in petroleum engineering: generation of random parameter fields, predictive uncertainty quantifications and automatic history matching. GenPack, which was developed in a modularized manner, is a non-intrusive package which can be integrated with any existing commercial simulator in petroleum engineering to facilitate its application. Computational efficiency can be improved both theoretically by introducing a surrogate model-based probabilistic collocation method, and technically by using parallel computing. A series of synthetic cases are designed to demonstrate the capability of GenPack. The test results show that the random parameter field can be flexibly generated in a customized manner for petroleum engineering applications. The predictive uncertainty can be reasonably quantified and the computational efficiency is significantly improved. The ensemble Kalman filter (EnKF)-based automatic history matching method can improve predictive accuracy and reduce the corresponding predictive uncertainty by accounting for observations.

  10. Uncertainties of the 50-year wind from short time series using generalized extreme value distribution and generalized Pareto distribution

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, Jakob; Rathmann, Ole

    2015-01-01

    This study examines the various sources of the uncertainties in the application of two widely used extreme value distribution functions, the generalized extreme value distribution (GEVD) and the generalized Pareto distribution (GPD). The study is done through the analysis of measurements from ... as a guideline for applying GEVD and GPD to wind time series of limited length. The data analysis shows that, with reasonable choice of relevant parameters, GEVD and GPD give consistent estimates of the return winds. For GEVD, the base period should be chosen in accordance with the occurrence of the extreme wind ...
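
    A hedged sketch of the GEVD part of such an analysis: fit annual-maximum winds with scipy's genextreme and read the 50-year wind off the 1 - 1/50 quantile, with a crude bootstrap to hint at the sampling uncertainty; the 30-value series below is synthetic, not measurement data.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(5)

        # Synthetic annual-maximum wind speeds (m/s); real data would replace this
        annual_max = 22 + 3 * rng.gumbel(size=30)

        c, loc, scale = genextreme.fit(annual_max)
        u50 = genextreme.ppf(1 - 1 / 50, c, loc=loc, scale=scale)
        print(f"50-year wind estimate: {u50:.1f} m/s")

        # Resampling the short series gives a rough handle on the estimation uncertainty
        boot = [genextreme.ppf(1 - 1 / 50, *genextreme.fit(rng.choice(annual_max, annual_max.size)))
                for _ in range(200)]
        print("bootstrap spread (std):", round(float(np.std(boot)), 2), "m/s")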

  11. The Quark-Gluon Plasma Equation of State and the Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    L. I. Abou-Salem

    2015-01-01

    The quark-gluon plasma (QGP) equation of state within a minimal length scenario or Generalized Uncertainty Principle (GUP) is studied. The GUP is implemented in deriving the thermodynamics of an ideal QGP at vanishing chemical potential. We find a significant effect of the GUP term. The main features of QCD lattice results were quantitatively reproduced in the cases of nf=0, nf=2, and nf=2+1 flavors for the energy density, the pressure, and the interaction measure. A notable point is the large value of the bag pressure, especially in the case of nf=2+1 flavors, which reflects the expected strong correlation between quarks in this bag. One can notice that the asymptotic behavior characterized by the Stefan-Boltzmann limit is satisfied.

  12. Robust sliding mode control of general time-varying delay stochastic systems with structural uncertainties

    Institute of Scientific and Technical Information of China (English)

    Sheng-Guo WANG; Libin BAI; Mingzhi CHEN

    2014-01-01

    This paper presents a new robust sliding mode control (SMC) method with well-developed theoretical proof for general uncertain time-varying delay stochastic systems with structural uncertainties and the Brownian noise (Wiener process). The key features of the proposed method are to apply singular value decomposition (SVD) to all structural uncertainties and to introduce adjustable parameters for control design along with the SMC method. It leads to a less-conservative condition for robust stability and a new robust controller for the general uncertain stochastic systems via linear matrix inequality (LMI) forms. The system states are able to reach the SMC switching surface as guaranteed in probability 1. Furthermore, it is theoretically proved that the proposed method with the SVD and adjustable parameters is less conservatism than the method without the SVD. The paper is mainly to provide all strict theoretical proofs for the method and results.

  13. Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts

    2006-05-01

    This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. This study leads to practical recommendations for reducing the uncertainty and

  14. Bounds for reference-frame independent protocols in quantum cryptography using generalized uncertainty relations

    CERN Document Server

    Le, Thinh Phuc; Scarani, Valerio

    2011-01-01

    We define a family of reference-frame-independent quantum cryptography protocols for arbitrary dimensional signals. The generalized entropic uncertainty relations [M. Tomamichel and R. Renner, Phys. Rev. Lett. 106, 110506 (2011)] are used for the first time to derive security bounds for protocols which use more than two measurements and combine the statistics in a non-linear parameter. This shows the power and versatility of this technique compared to the heavier, though usually tighter, conventional techniques.

  15. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric mode
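
    Picking up the book's opening example of comparing two accident rates, a short likelihood-ratio calculation for two Poisson rates might look as follows; the counts and exposures are invented.

        import numpy as np
        from scipy.stats import chi2

        # Two accident counts over known exposures (made-up numbers)
        x1, t1 = 12, 3.0          # 12 accidents in 3 years at site 1
        x2, t2 = 25, 4.0          # 25 accidents in 4 years at site 2

        def poisson_loglik(x, t, rate):
            return x * np.log(rate * t) - rate * t    # up to an additive constant

        # Maximized log-likelihoods under separate rates (H1) and a common rate (H0)
        l1 = poisson_loglik(x1, t1, x1 / t1) + poisson_loglik(x2, t2, x2 / t2)
        common = (x1 + x2) / (t1 + t2)
        l0 = poisson_loglik(x1, t1, common) + poisson_loglik(x2, t2, common)

        lr = 2 * (l1 - l0)                            # likelihood ratio statistic
        print("LR =", round(lr, 2), " p =", round(float(chi2.sf(lr, df=1)), 3))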

  16. Impacts of generalized uncertainty principle on black hole thermodynamics and Salecker-Wigner inequalities

    Energy Technology Data Exchange (ETDEWEB)

    Tawfik, A., E-mail: a.tawfik@eng.mti.edu.eg [Egyptian Center for Theoretical Physics (ECTP), MTI University, 11571 Cairo (Egypt)

    2013-07-01

    We investigate the impacts of Generalized Uncertainty Principle (GUP) proposed by some approaches to quantum gravity such as String Theory and Doubly Special Relativity on black hole thermodynamics and Salecker-Wigner inequalities. Utilizing Heisenberg uncertainty principle, the Hawking temperature, Bekenstein entropy, specific heat, emission rate and decay time are calculated. As the evaporation entirely eats up the black hole mass, the specific heat vanishes and the temperature approaches infinity with an infinite radiation rate. It is found that the GUP approach prevents the black hole from the entire evaporation. It implies the existence of remnants at which the specific heat vanishes. The same role is played by the Heisenberg uncertainty principle in constructing the hydrogen atom. We discuss how the linear GUP approach solves the entire-evaporation-problem. Furthermore, the black hole lifetime can be estimated using another approach; the Salecker-Wigner inequalities. Assuming that the quantum position uncertainty is limited to the minimum wavelength of measuring signal, Wigner second inequality can be obtained. If the spread of quantum clock is limited to some minimum value, then the modified black hole lifetime can be deduced. Based on linear GUP approach, the resulting lifetime difference depends on black hole relative mass and the difference between black hole mass with and without GUP is not negligible.

  17. A Generalized Fuzzy Integer Programming Approach for Environmental Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Y. R. Fan

    2014-01-01

    In this study, a generalized fuzzy integer programming (GFIP) method is developed for planning waste allocation and facility expansion under uncertainty. The developed method can (i) deal with uncertainties expressed as fuzzy sets with known membership functions regardless of the shapes (linear or nonlinear) of these membership functions, (ii) allow uncertainties to be directly communicated into the optimization process and the resulting solutions, and (iii) reflect dynamics in terms of waste-flow allocation and facility-capacity expansion. A stepwise interactive algorithm (SIA) is proposed to solve the GFIP problem and generate solutions expressed as fuzzy sets. The procedures of the SIA method include (i) discretizing the membership function grade of fuzzy parameters into a set of α-cut levels; (ii) converting the GFIP problem into an inexact mixed-integer linear programming (IMILP) problem under each α-cut level; (iii) solving the IMILP problem through an interactive algorithm; and (iv) approximating the membership function for decision variables through statistical regression methods. The developed GFIP method is applied to a municipal solid waste (MSW) management problem to facilitate decision making on waste flow allocation and waste-treatment facilities expansion. The results, which are expressed as discrete or continuous fuzzy sets, can help identify desired alternatives for managing MSW under uncertainty.
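
    Step (i) of the SIA can be pictured with a tiny, assumed example: discretizing a triangular fuzzy parameter into α-cut intervals, each of which would then feed an inexact (interval) MILP solve in steps (ii) and (iii); the triangular bounds below are arbitrary.

        import numpy as np

        # Triangular fuzzy number (low, mode, high), chosen purely for illustration
        low, mode, high = 10.0, 15.0, 22.0

        def alpha_cut(alpha):
            # interval of values whose membership grade is at least alpha
            return (low + alpha * (mode - low), high - alpha * (high - mode))

        for alpha in np.linspace(0.0, 1.0, 5):
            lo, hi = alpha_cut(alpha)
            print(f"alpha = {alpha:.2f}: [{lo:.2f}, {hi:.2f}]")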

  18. The Generalized Uncertainty Principle, entropy bounds and black hole (non-)evaporation in a thermal bath

    CERN Document Server

    Custodio, P S

    2003-01-01

    We apply the Generalized Uncertainty Principle (GUP) to the problem of maximum entropy and evaporation/absorption of energy of black holes near the Planck scale. Within this general approach we find corrections to the maximum entropy, and indications of quenching of the evaporation, not only because the evaporation term goes to a finite limit, but also because absorption of quanta seems to help the balance for black holes in a thermal bath. Then, residual masses around the Planck scale may be the final outcome of primordial black hole evaporation.

  19. Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error.

    Science.gov (United States)

    Jäntschi, Lorentz; Bálint, Donatella; Bolboacă, Sorana D

    2016-01-01

    Multiple linear regression analysis is widely used to link an outcome with predictors for better understanding of the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients on linear models with two predictors without any constrictive assumptions on the distribution of the errors. The algorithm was developed, implemented, and tested as proof-of-concept using fourteen sets of compounds by investigating the link between activity/property (as outcome) and structural feature information incorporated by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error is significantly different by the convenient value of two when the Gauss-Laplace distribution was used to relax the constrictive assumption of the normal distribution of the error. Therefore, the Gauss-Laplace distribution of the error could not be rejected while the hypothesis that the power of the error from Gauss-Laplace distribution is normal distributed also failed to be rejected.
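
    Not the authors' code, but a compact sketch of the same idea: estimate two-predictor regression coefficients together with the scale and the power of a generalized Gauss-Laplace (generalized normal) error by direct likelihood maximization; the data are synthetic with Laplace-like errors, so the estimated power should land near 1 rather than 2.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        rng = np.random.default_rng(6)

        # Synthetic data: two predictors plus intercept, heavy-tailed errors
        n = 200
        X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
        beta_true = np.array([1.0, 2.0, -0.5])
        y = X @ beta_true + rng.laplace(scale=0.7, size=n)

        def neg_loglik(params):
            b, log_s, log_p = params[:3], params[3], params[4]
            s, p = np.exp(log_s), np.exp(log_p)
            e = y - X @ b
            # generalized normal log-density: log p - log(2s) - lgamma(1/p) - (|e|/s)^p
            return -(n * (np.log(p) - np.log(2 * s) - gammaln(1 / p))
                     - np.sum((np.abs(e) / s) ** p))

        b_ls = np.linalg.lstsq(X, y, rcond=None)[0]          # least-squares start
        x0 = np.concatenate([b_ls, [np.log(np.std(y - X @ b_ls)), np.log(2.0)]])
        res = minimize(neg_loglik, x0, method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
        print("coefficients:", np.round(res.x[:3], 3),
              " estimated error power:", round(float(np.exp(res.x[4])), 2))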

  20. Distress and avoidance in generalized anxiety disorder: exploring the relationships with intolerance of uncertainty and worry.

    Science.gov (United States)

    Lee, Jonathan K; Orsillo, Susan M; Roemer, Lizabeth; Allen, Laura B

    2010-01-01

    Theory and research suggest that treatments targeting experiential avoidance may enhance outcomes for patients with generalized anxiety disorder (GAD). The present study examined the role of experiential avoidance and distress about emotions in a treatment-seeking sample with a principal diagnosis of GAD compared with demographically matched nonanxious controls and sought to explore their shared relationship with two putative psychopathological processes in GAD: intolerance of uncertainty and worry. Patients with GAD reported significantly higher levels of experiential avoidance and distress about emotions compared with nonclinical controls while controlling for depressive symptoms, and measures of these constructs significantly predicted GAD status. Additionally, experiential avoidance and distress about anxious, positive, and angry emotions shared unique variance with intolerance of uncertainty when negative affect was partialed out, whereas only experiential avoidance and distress about anxious emotions shared unique variance with worry. Discussion focuses on implications for treatment as well as future directions for research.

  1. Classifier Design Given an Uncertainty Class of Feature Distributions via Regularized Maximum Likelihood and the Incorporation of Biological Pathway Knowledge in Steady-State Phenotype Classification.

    Science.gov (United States)

    Esfahani, Mohammad Shahrokh; Knight, Jason; Zollanvari, Amin; Yoon, Byung-Jun; Dougherty, Edward R

    2013-10-01

    Contemporary high-throughput technologies provide measurements of very large numbers of variables but often with very small sample sizes. This paper proposes an optimization-based paradigm for utilizing prior knowledge to design better performing classifiers when sample sizes are limited. We derive approximate expressions for the first and second moments of the true error rate of the proposed classifier under the assumption of two widely-used models for the uncertainty classes; ε-contamination and p-point classes. The applicability of the approximate expressions is discussed by defining the problem of finding optimal regularization parameters through minimizing the expected true error. Simulation results using the Zipf model show that the proposed paradigm yields improved classifiers that outperform traditional classifiers that use only training data. Our application of interest involves discrete gene regulatory networks possessing labeled steady-state distributions. Given prior operational knowledge of the process, our goal is to build a classifier that can accurately label future observations obtained in the steady state by utilizing both the available prior knowledge and the training data. We examine the proposed paradigm on networks containing NF-κB pathways, where it shows significant improvement in classifier performance over the classical data-only approach to classifier design. Companion website: http://gsp.tamu.edu/Publications/supplementary/shahrokh12a.

  2. Assessment of parametric uncertainty for groundwater reactive transport modeling

    Science.gov (United States)

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood
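
    The kind of residual checks that motivate moving from a Gaussian to a generalized likelihood can be sketched as below; the "residuals" here are synthetic stand-ins made deliberately heavy-tailed and autocorrelated, not output of the reactive transport model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Synthetic stand-in residuals: heavy-tailed and serially correlated on purpose
        e = stats.t.rvs(df=3, size=300, random_state=rng)
        resid = np.convolve(e, [1.0, 0.6], mode="same")

        # Quick diagnostics: departure from normality and serial correlation
        shapiro_p = stats.shapiro(resid).pvalue
        lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        print(f"Shapiro-Wilk p = {shapiro_p:.3g}, lag-1 autocorrelation = {lag1:.2f}")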

  3. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    Science.gov (United States)

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation polices with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and the plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  4. Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    CERN Document Server

    Garrido, M C; Ruiz, A; 10.1613/jair.533

    2011-01-01

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard prob...

  5. On the origin of generalized uncertainty principle from compactified M5-brane

    Science.gov (United States)

    Sepehri, Alireza; Pradhan, Anirudh; Beesham, A.

    2017-08-01

    In this paper, we demonstrate that compactification in M-theory can lead to a deformation of field theory consistent with the generalized uncertainty principle (GUP). We observe that the matter fields in the M3-brane action contain higher derivative terms. We demonstrate that such terms can also be constructed from a reformulation of the field theory by the GUP. In fact, we will construct the Heisenberg algebra consistent with this deformation, and explicitly demonstrate it to be the Heisenberg algebra obtained from the GUP. Thus, we use compactification in M-theory to motivate the existence of the GUP.

  6. The Generalized Uncertainty Principle and Black Hole Entropy in Tunneling formalism

    CERN Document Server

    Majumder, Barun

    2013-01-01

    In this Letter we study the effects of the Generalized Uncertainty Principle in the tunneling formalism for Hawking radiation to evaluate the quantum-corrected Hawking temperature and entropy for a Schwarzschild black hole. We compare our results with the existing results given by other candidate theories of quantum gravity. In the entropy-area relation we find some new correction terms, and in the leading order we find a term which varies as the square root of the area. We also get the well known logarithmic correction in the sub-leading order. We discuss the significance of this new quantum-corrected leading order term.

  7. On the fractional minimal length Heisenberg-Weyl uncertainty relation from fractional Riccati generalized momentum operator

    Energy Technology Data Exchange (ETDEWEB)

    Rami, El-Nabulsi Ahmad [Department of Nuclear and Energy Engineering, Cheju National University, Ara-dong 1, Jeju 690-756 (Korea, Republic of)], E-mail: nabulsiahmadrami@yahoo.fr

    2009-10-15

    It was shown that the minimal length Heisenberg-Weyl uncertainty relation may be obtained if the ordinary momentum differentiation operator is extended to its fractional counterpart, namely the generalized fractional Riccati momentum operator of order 0 < β ≤ 1. Some interesting consequences are discussed in concordance with the UV/IR correspondence obtained within the framework of non-commutative C-space geometry, string theory, Rovelli loop quantum gravity, Amelino-Camelia doubly special relativity, Nottale scale relativity and El-Naschie Cantorian fractal spacetime. The fractional theory integrates an absolute minimal length and, surprisingly, a non-commutative position space.

  8. Entropy of Nonstatic Black Hole with the Internal Global Monopole and the Generalized Uncertainty Relation

    Institute of Scientific and Technical Information of China (English)

    HAN Yi-Wen; LIU Shou-Yu

    2005-01-01

    The new equation of state density is obtained by utilization of the generalized uncertainty relation. With the help of coordinates and the Wentzel-Kramers-Brillouin approximation, a direct calculation of the scalar field entropy of the nonstatic black hole with an internal global monopole is performed. The entropy obtained from the calculation is proportional to the horizon area. The calculation remains free of divergence without any cutoff, which differs from the brick-wall method. However, the pertinent result is limited.

  9. On the stability of the dark energy based on generalized uncertainty principle

    CERN Document Server

    Pasqua, Antonio; Khomenko, Iuliia

    2013-01-01

    The new agegraphic Dark Energy (NADE) model (based on the generalized uncertainty principle) interacting with Dark Matter (DM) is considered in this study via a power-law form of the scale factor $a(t)$. The equation of state (EoS) parameter $\omega_{G}$ is observed to have a phantom-like behaviour. The stability of this model is investigated through the squared speed of sound $v_{s}^{2}$: it is found that $v_{s}^{2}$ always remains negative, which indicates instability of the considered model.

  10. Corrections to entropy and thermodynamics of charged black hole using generalized uncertainty principle

    CERN Document Server

    Tawfik, Abdel Nasser

    2015-01-01

    Recently, there has been much attention devoted to resolving the quantum corrections to the Bekenstein-Hawking (black hole) entropy, which relates the entropy to the cross-sectional area of the black hole horizon. Using the generalized uncertainty principle (GUP), corrections to the geometric entropy and thermodynamics of black holes will be introduced. The impact of the GUP on the entropy near the horizon of three types of black holes, Schwarzschild, Garfinkle-Horowitz-Strominger and Reissner-Nordström, is determined. It is found that the logarithmic divergence in the entropy-area relation turns out to be positive. The entropy $S$, which is assumed to be related to the horizon's two-dimensional area, gets additional terms, for instance $2\sqrt{\pi}\,\alpha\,\sqrt{S}$, where $\alpha$ is the GUP parameter.

  11. Effect of Generalized Uncertainty Principle on Main-Sequence Stars and White Dwarfs

    CERN Document Server

    Moussa, Mohamed

    2015-01-01

    This paper addresses the effect of the generalized uncertainty principle, emerging from different approaches to quantum gravity at the Planck scale, on the thermodynamic properties of photons, non-relativistic ideal gases and degenerate fermions. Modifications in pressure, particle number and energy density are calculated. Astrophysical objects such as main sequence stars and white dwarfs are examined and discussed as an application. A modification in the Lane-Emden equation, due to a change in the polytropic relation caused by the presence of quantum gravity, is investigated. The applicable range of quantum gravity parameters is estimated. The bounds on the perturbed parameters are relatively large, but they may be considered reasonable values in the astrophysical regime.

  12. Massive vector particles tunneling from black holes influenced by the generalized uncertainty principle

    Science.gov (United States)

    Li, Xiang-Qian

    2016-12-01

    This study considers the generalized uncertainty principle, which incorporates the central idea of large extra dimensions, to investigate the processes involved when massive spin-1 particles tunnel from Reissner-Nordstrom and Kerr black holes under the effects of quantum gravity. For the black hole, the quantum gravity correction decelerates the increase in temperature. Up to O(1/Mf^2), the corrected temperatures are affected by the mass and angular momentum of the emitted vector bosons. In addition, the temperature of the Kerr black hole becomes uneven due to rotation. When the mass of the black hole approaches the order of the higher dimensional Planck mass Mf, it stops radiating and yields a black hole remnant.

  13. Galilean and Lorentz Transformations in a Space with Generalized Uncertainty Principle

    Science.gov (United States)

    Tkachuk, V. M.

    2016-12-01

    We consider a space with a Generalized Uncertainty Principle (GUP) which can be obtained within the framework of deformed commutation relations. In the space with GUP we have found transformations relating coordinates and times of moving and rest frames of reference to first order in the parameter of deformation. In the non-relativistic case we find the deformed Galilean transformation, which is a rotation in Euclidean space-time. This transformation is similar to the Lorentz one but written for Euclidean space-time, where the speed of light is replaced by some velocity related to the parameter of deformation. We show that for a relativistic particle in the space with GUP the coordinates of the rest and moving frames of reference satisfy the Lorentz transformation with some effective speed of light.

  14. Quantum corrections to the thermodynamics of Schwarzschild-Tangherlini black hole and the generalized uncertainty principle

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Z.W.; Zu, X.T. [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Li, H.L. [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Shenyang Normal University, College of Physics Science and Technology, Shenyang (China); Yang, S.Z. [China West Normal University, Physics and Space Science College, Nanchong (China)

    2016-04-15

    We investigate the thermodynamics of the Schwarzschild-Tangherlini black hole in the context of the generalized uncertainty principle (GUP). The corrections to the Hawking temperature, entropy and the heat capacity are obtained via the modified Hamilton-Jacobi equation. These modifications show that the GUP changes the evolution of the Schwarzschild-Tangherlini black hole. In particular, the GUP effect becomes significant when the radius or mass of the black hole approaches the order of the Planck scale: the black hole stops radiating and leaves a remnant. Meanwhile, the Planck scale remnant can be confirmed through the analysis of the heat capacity. These phenomena imply that the GUP may give a way to solve the information paradox. Besides, we also investigate the possibility of observing such a black hole at the Large Hadron Collider (LHC), and the results demonstrate that the black hole cannot be produced at the current LHC. (orig.)

  15. Effect of Generalized Uncertainty Principle on Main-Sequence Stars and White Dwarfs

    Directory of Open Access Journals (Sweden)

    Mohamed Moussa

    2015-01-01

    Full Text Available This paper addresses the effect of the generalized uncertainty principle, emerging from different approaches to quantum gravity at the Planck scale, on the thermodynamic properties of photons, nonrelativistic ideal gases, and degenerate fermions. Modifications in pressure, particle number, and energy density are calculated. Astrophysical objects such as main-sequence stars and white dwarfs are examined and discussed as an application. A modification in the Lane-Emden equation due to a change in the polytropic relation caused by the presence of quantum gravity is investigated. The applicable range of quantum gravity parameters is estimated. The bounds on the perturbed parameters are relatively large, but they may be considered reasonable values in the astrophysical regime.

  16. Corrections to the thermodynamics of Schwarzschild-Tangherlini black hole and the generalized uncertainty principle

    CERN Document Server

    Feng, Z W; Zu, X T

    2016-01-01

    We investigate the thermodynamics of the Schwarzschild-Tangherlini black hole in the context of the generalized uncertainty principle. The corrections to the Hawking temperature, entropy and the heat capacity are obtained via the modified Hamilton-Jacobi equation. These modifications show that the GUP changes the evolution of the Schwarzschild-Tangherlini black hole. In particular, the GUP effect becomes significant when the black hole evaporates down to the order of the Planck scale: Hawking radiation stops and a remnant is left. The endpoint of evaporation is found to be a Planck-scale remnant with zero heat capacity. These phenomena imply that the GUP may give a way to solve the information paradox. Besides, we also analyze the possibility of finding such black holes at the LHC, and show that they cannot be produced at the current LHC.

  17. Massive vector particles tunneling from black holes influenced by the generalized uncertainty principle

    CERN Document Server

    Li, Xiang-Qian

    2016-01-01

    Considering the generalized uncertainty principle which incorporates the central idea of Large eXtra Dimensions, the processes of massive spin-1 particles tunneling from Reissner-Nordstrom and Kerr black holes are investigated. For the black hole, the quantum gravity correction decelerates the increase of the temperature. When the mass of the black hole approaches the order of the higher dimensional Planck mass $M_f$, it stops radiating and leads to a black hole remnant. To $\mathcal{O}(\frac{1}{M_f^2})$, the corrected temperatures are affected by the mass and angular momentum of emitted vector bosons. Meanwhile, the temperature of the Kerr black hole becomes uneven due to the rotation.

  18. Individual uncertainty and the uncertainty of science: The impact of perceived conflict and general self-efficacy on the perception of tentativeness and credibility of scientific information

    Directory of Open Access Journals (Sweden)

    Danny eFlemming

    2015-12-01

    Full Text Available We examined in two empirical studies how situational and personal aspects of uncertainty influence laypeople’s understanding of the uncertainty of scientific information, with focus on the detection of tentativeness and perception of scientific credibility. In the first study (N = 48), we investigated the impact of a perceived conflict due to contradicting information as a situational, text-inherent aspect of uncertainty. The aim of the second study (N = 61) was to explore the role of general self-efficacy as an intra-personal uncertainty factor. In Study 1, participants read one of two versions of an introductory text in a between-group design. This text provided them with an overview about the neurosurgical procedure of deep brain stimulation (DBS). The text expressed a positive attitude toward DBS in one experimental condition or focused on the negative aspects of this method in the other condition. Then participants in both conditions read the same text that dealt with a study about DBS as experimental treatment in a small sample of patients with major depression. Perceived conflict between the two texts was found to increase the perception of tentativeness and to decrease the perception of scientific credibility, implicating that text-inherent aspects have significant effects on critical appraisal. The results of Study 2 demonstrated that participants with higher general self-efficacy detected the tentativeness to a lesser degree and assumed a higher level of scientific credibility, indicating a more naïve understanding of scientific information. This appears to be contradictory to large parts of previous findings that showed positive effects of high self-efficacy on learning. Both studies showed that perceived tentativeness and perceived scientific credibility of medical information contradicted each other. We conclude that there is a need for supporting laypeople in understanding the uncertainty of scientific information and that

  19. Individual Uncertainty and the Uncertainty of Science: The Impact of Perceived Conflict and General Self-Efficacy on the Perception of Tentativeness and Credibility of Scientific Information.

    Science.gov (United States)

    Flemming, Danny; Feinkohl, Insa; Cress, Ulrike; Kimmerle, Joachim

    2015-01-01

    We examined in two empirical studies how situational and personal aspects of uncertainty influence laypeople's understanding of the uncertainty of scientific information, with focus on the detection of tentativeness and perception of scientific credibility. In the first study (N = 48), we investigated the impact of a perceived conflict due to contradicting information as a situational, text-inherent aspect of uncertainty. The aim of the second study (N = 61) was to explore the role of general self-efficacy as an intra-personal uncertainty factor. In Study 1, participants read one of two versions of an introductory text in a between-group design. This text provided them with an overview about the neurosurgical procedure of deep brain stimulation (DBS). The text expressed a positive attitude toward DBS in one experimental condition or focused on the negative aspects of this method in the other condition. Then participants in both conditions read the same text that dealt with a study about DBS as experimental treatment in a small sample of patients with major depression. Perceived conflict between the two texts was found to increase the perception of tentativeness and to decrease the perception of scientific credibility, implicating that text-inherent aspects have significant effects on critical appraisal. The results of Study 2 demonstrated that participants with higher general self-efficacy detected the tentativeness to a lesser degree and assumed a higher level of scientific credibility, indicating a more naïve understanding of scientific information. This appears to be contradictory to large parts of previous findings that showed positive effects of high self-efficacy on learning. Both studies showed that perceived tentativeness and perceived scientific credibility of medical information contradicted each other. We conclude that there is a need for supporting laypeople in understanding the uncertainty of scientific information and that scientific writers should

  20. Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA

    Science.gov (United States)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-11-01

    This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach has been illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.

  1. f(R) in Holographic and Agegraphic Dark Energy Models and the Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Barun Majumder

    2013-01-01

    Full Text Available We studied a unified approach with the holographic, new agegraphic, and f(R) dark energy model to construct the form of f(R) which in general is responsible for the curvature driven explanation of the very early inflation along with presently observed late time acceleration. We considered the generalized uncertainty principle in our approach which incorporated the corrections in the entropy-area relation and thereby modified the energy densities for the cosmological dark energy models considered. We found that holographic and new agegraphic f(R) gravity models can behave like phantom or quintessence models in the spatially flat FRW universe. We also found a distinct term in the form of f(R) which goes as R^{3/2} due to the consideration of the GUP modified energy densities. Although the presence of this term in the action can be important in explaining the early inflationary scenario, Capozziello et al. recently showed that f(R) ~ R^{3/2} leads to an accelerated expansion, that is, a negative value for the deceleration parameter q which fits well with SNeIa and WMAP data.

  2. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
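
    As a schematic of the final step only (PCA of a sample correlation matrix from an already superposed ensemble, not the maximum likelihood superposition or ML covariance estimator of the paper; the array names are hypothetical), the dominant correlation modes can be extracted as follows:

      import numpy as np

      def correlation_modes(coords, n_modes=3):
          """coords: (n_structures, n_atoms, 3) array of superposed coordinates.
          Returns the leading eigenvalues/eigenvectors of the sample correlation matrix."""
          n_structs, n_atoms, _ = coords.shape
          x = coords.reshape(n_structs, n_atoms * 3)
          x = x - x.mean(axis=0)                    # remove the mean structure
          cov = x.T @ x / (n_structs - 1)           # sample covariance
          d = np.sqrt(np.diag(cov))
          corr = cov / np.outer(d, d)               # correlation matrix
          evals, evecs = np.linalg.eigh(corr)       # eigendecomposition (ascending)
          order = np.argsort(evals)[::-1][:n_modes]
          return evals[order], evecs[:, order]

      # toy usage with a random ensemble of 50 'structures' of 20 atoms
      rng = np.random.default_rng(42)
      vals, vecs = correlation_modes(rng.standard_normal((50, 20, 3)))
      print(vals)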

  3. High-order squeezing of the quantum electromagnetic field and the generalized uncertainty relations in two-mode squeezed states

    Science.gov (United States)

    Li, Xi-Zeng; Su, Bao-Xia

    1994-01-01

    It is found that the two-mode output quantum electromagnetic field in two-mode squeezed states exhibits higher-order squeezing to all even orders, and the generalized uncertainty relations are also presented for the first time. The concept of higher-order squeezing of the single-mode quantum electromagnetic field was first introduced and applied to several processes by Hong and Mandel in 1985. Later, Li Xizeng and Shan Ying calculated the higher-order squeezing in the process of degenerate four-wave mixing and presented the higher-order uncertainty relations of the fields in single-mode squeezed states. In this paper we generalize the above work to higher-order squeezing in two-mode squeezed states. The generalized uncertainty relations are also presented for the first time.

  4. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...
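
    The analytic marginalization referred to rests on a standard Gaussian integral; a minimal version for the flat-prior case (my notation, not necessarily that of the paper) expands -2 ln L to quadratic order in the nuisance parameters psi about their conditional maximum:

      % quadratic expansion in the nuisance parameters
      -2\ln L(\theta,\psi) \simeq -2\ln L(\theta,\hat\psi)
          + (\psi-\hat\psi)^{\mathsf T} F_{\psi\psi}\,(\psi-\hat\psi),
      \qquad
      F_{\psi\psi} = -\left.\frac{\partial^{2}\ln L}{\partial\psi\,\partial\psi^{\mathsf T}}\right|_{\hat\psi}

      % marginalizing over psi with a flat prior then gives, up to a constant,
      L_{\mathrm{marg}}(\theta) = \int L(\theta,\psi)\,d\psi
          \;\propto\; L(\theta,\hat\psi)\,\bigl[\det F_{\psi\psi}\bigr]^{-1/2}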

  5. What's in a name? Intolerance of uncertainty, other uncertainty-relevant constructs, and their differential relations to worry and generalized anxiety disorder.

    Science.gov (United States)

    Koerner, Naomi; Mejia, Teresa; Kusec, Andrea

    2017-03-01

    A number of studies have examined the association of intolerance of uncertainty (IU) to trait worry and generalized anxiety disorder (GAD). However, few studies have examined the extent of overlap between IU and other psychological constructs that bear conceptual resemblance to IU, despite the fact that IU-type constructs have been discussed and examined extensively within psychology and other disciplines. The present study investigated (1) the associations of IU, trait worry, and GAD status to a negative risk orientation, trait curiosity, indecisiveness, perceived constraints, self-oriented and socially prescribed perfectionism, intolerance of ambiguity, the need for predictability, and the need for order and structure and (2) whether IU is a unique correlate of trait worry and of the presence versus absence of Probable GAD, when overlap with other uncertainty-relevant constructs is accounted for. N = 255 adults completed self-report measures of the aforementioned constructs. Each of the constructs was significantly associated with IU. Only IU, and a subset of the other uncertainty-relevant constructs were correlated with trait worry or distinguished the Probable GAD group from the Non-GAD group. IU was the strongest unique correlate of trait worry and of the presence versus absence of Probable GAD. Indecisiveness, self-oriented perfectionism and the need for predictability were also unique correlates of trait worry or GAD status. Implications of the findings are discussed, in particular as they pertain to the definition, conceptualization, and cognitive-behavioral treatment of IU in GAD.

  6. A general method to select representative models for decision making and optimization under uncertainty

    Science.gov (United States)

    Shirangi, Mehrdad G.; Durlofsky, Louis J.

    2016-11-01

    The optimization of subsurface flow processes under geological uncertainty technically requires flow simulation to be performed over a large set of geological realizations for each function evaluation at every iteration of the optimizer. Because flow simulation over many permeability realizations (only permeability is considered to be uncertain in this study) may entail excessive computation, simulations are often performed for only a subset of 'representative' realizations. It is however challenging to identify a representative subset that provides flow statistics in close agreement with those from the full set, especially when the decision parameters (e.g., time-varying well pressures, well locations) are unknown a priori, as they are in optimization problems. In this work, we introduce a general framework, based on clustering, for selecting a representative subset of realizations for use in simulations involving 'new' sets of decision parameters. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. Calculation of flow-based features requires the specification of a (base) flow problem and simulation over the full set of realizations. Permeability information is captured concisely through use of principal component analysis. By computing the difference between the flow response for the subset and the full set, we quantify the performance of various realization-selection methods. The impact of different weightings for flow and permeability information in the cluster-based selection procedure is assessed for a range of examples involving different types of decision parameters. These decision parameters are generated either randomly, in a manner that is consistent with the solutions proposed in global stochastic optimization procedures such as GA and PSO, or through perturbation around a base case, consistent with the solutions considered in pattern search

  7. Phylogenetic estimation with partial likelihood tensors

    CERN Document Server

    Sumner, J G

    2008-01-01

    We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.
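
    For orientation, the partial likelihood vectors that the tensor formulation generalizes can be sketched for a single site under a toy two-state model (the transition matrix, state frequencies and function names below are made up for illustration; this is the standard Felsenstein pruning step, not the authors' tensor method):

      import numpy as np

      # toy 2-state model: P[i, j] = probability of state i -> j along a branch
      P = np.array([[0.9, 0.1],
                    [0.2, 0.8]])

      def leaf_vector(state, n_states=2):
          """Partial likelihood vector of a leaf with an observed state."""
          v = np.zeros(n_states)
          v[state] = 1.0
          return v

      def internal_vector(child_vectors, transition_mats):
          """Pruning step: L_parent[s] = prod_c sum_t P_c[s, t] * L_child_c[t]."""
          parent = np.ones_like(child_vectors[0])
          for v, Pm in zip(child_vectors, transition_mats):
              parent *= Pm @ v
          return parent

      # a cherry (two leaves) below the root, with stationary frequencies pi
      root = internal_vector([leaf_vector(0), leaf_vector(1)], [P, P])
      pi = np.array([0.5, 0.5])
      print("site likelihood:", pi @ root)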

  8. Higher-Order Squeezing of Quantum Field and the Generalized Uncertainty Relations in Non-Degenerate Four-Wave Mixing

    Science.gov (United States)

    Li, Xi-Zeng; Su, Bao-Xia

    1996-01-01

    It is found that the field of the combined mode of the probe wave and the phase-conjugate wave in the process of non-degenerate four-wave mixing exhibits higher-order squeezing to all even orders. And the generalized uncertainty relations in this process are also presented.

  9. A general equilibrium analysis of rural-urban migration under uncertainty.

    Science.gov (United States)

    Beladi, H; Ingene, C A

    1994-02-01

    "This paper analyzes the implications of an exogenous shift in relative prices for an economy that suffers from urban unemployment, as well as uncertainty in the agricultural sector. Among other things, we show that with agricultural uncertainty, an exogenous shift in relative prices will lower agricultural profit. This result is in sharp contrast with the conventional case of risk-neutrality or certainty where agricultural profit is unaffected by changes in the terms of trade." The consequences for rural-urban migration in developing countries are implied.

  10. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...

  11. A general program to compute the multivariable stability margin for systems with parametric uncertainty

    Science.gov (United States)

    Sanchez Pena, Ricardo S.; Sideris, Athanasios

    1988-01-01

    A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present in some detail important aspects of the program. An example is presented using a lateral-directional control system.

  12. Imprecision and uncertainty in information representation and processing new tools based on intuitionistic fuzzy sets and generalized nets

    CERN Document Server

    Sotirov, Sotir

    2016-01-01

    The book offers a comprehensive and timely overview of advanced mathematical tools for both uncertainty analysis and modeling of parallel processes, with a special emphasis on intuitionistic fuzzy sets and generalized nets. The different chapters, written by active researchers in their respective areas, are structured to provide a coherent picture of this interdisciplinary yet still evolving field of science. They describe key tools and give practical insights into and research perspectives on the use of Atanassov's intuitionistic fuzzy sets and logic, and generalized nets for describing and dealing with uncertainty in different areas of science, technology and business, in a single, to date unique book. Here, readers find theoretical chapters, dealing with intuitionistic fuzzy operators, membership functions and algorithms, among other topics, as well as application-oriented chapters, reporting on the implementation of methods and relevant case studies in management science, the IT industry, medicine and/or ...

  13. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...
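
    The 'chaotic likelihood' phenomenon is easy to reproduce: under the Logistic Map with additive observational noise, the log-likelihood of a candidate initial condition develops extremely fine-scale, multimodal structure even over a tiny interval. A minimal sketch (the noise level, map parameter and grid are arbitrary choices, not Berliner's setup):

      import numpy as np

      def logistic_orbit(x0, r=4.0, n=20):
          """Iterate the Logistic Map x_{t+1} = r * x_t * (1 - x_t)."""
          xs = np.empty(n)
          xs[0] = x0
          for t in range(1, n):
              xs[t] = r * xs[t - 1] * (1.0 - xs[t - 1])
          return xs

      rng = np.random.default_rng(3)
      true_x0, sigma = 0.3, 0.01
      obs = logistic_orbit(true_x0) + rng.normal(0.0, sigma, 20)   # noisy observations

      def loglike(x0):
          """Gaussian log-likelihood of an initial condition given the observations."""
          res = obs - logistic_orbit(x0)
          return -0.5 * np.sum((res / sigma) ** 2)

      # evaluate on a fine grid near the truth: the curve is wildly multimodal
      grid = np.linspace(0.29, 0.31, 2001)
      ll = np.array([loglike(x) for x in grid])
      print("best grid point:", grid[np.argmax(ll)], "log-likelihood:", ll.max())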

  14. Humor Styles and the Intolerance of Uncertainty Model of Generalized Anxiety

    Directory of Open Access Journals (Sweden)

    Nicholas A. Kuiper

    2014-08-01

    Full Text Available Past research suggests that sense of humor may play a role in anxiety. The present study builds upon this work by exploring how individual differences in various humor styles, such as affiliative, self-enhancing, and self-defeating humor, may fit within a contemporary research model of anxiety. In this model, intolerance of uncertainty is a fundamental personality characteristic that heightens excessive worry, thus increasing anxiety. We further propose that greater intolerance of uncertainty may also suppress the use of adaptive humor (affiliative and self-enhancing), and foster the increased use of maladaptive self-defeating humor. Initial correlational analyses provide empirical support for these proposals. In addition, we found that excessive worry and affiliative humor both served as significant mediators. In particular, heightened intolerance of uncertainty led to both excessive worry and a reduction in affiliative humor use, which, in turn, increased anxiety. We also explored potential humor mediating effects for each of the individual worry content domains in this model. These analyses confirmed the importance of affiliative humor as a mediator for worry pertaining to a wide range of content domains (e.g., relationships, lack of confidence, the future, and work). These findings were then discussed in terms of a combined model that considers how humor styles may impact the social sharing of positive and negative emotions.

  15. A model independent safeguard for unbinned Profile Likelihood

    CERN Document Server

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2016-01-01

    We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over-constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

  16. Generalized Empirical Likelihood Inference for Single-index Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    杨随根; 薛留根

    2015-01-01

    Based on the generalized estimation equations (GEE) and quadratic inference functions (QIF) methods, a bias-corrected generalized empirical likelihood was proposed to make statistical inference for the single-index model with longitudinal data. The maximum empirical likelihood estimator and the bias-corrected generalized empirical log-likelihood ratio statistics for the unknown index parameter in the model were obtained. It is proved that the maximum empirical likelihood estimator is asymptotically normal and the proposed statistics are asymptotically chi-square distributed under certain conditions, and hence they can be applied to construct the confidence region of the index parameter.

  17. Orbital State Uncertainty Realism

    Science.gov (United States)

    Horwood, J.; Poore, A. B.

    2012-09-01

    Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten

  18. How General are Risk Preferences? Choices under Uncertainty in Different Domains*

    Science.gov (United States)

    Einav, Liran; Finkelstein, Amy; Pascu, Iuliana; Cullen, Mark R.

    2011-01-01

    We analyze the extent to which individuals’ choices over five employer-provided insurance coverage decisions and one 401(k) investment decision exhibit systematic patterns, as would be implied by a general utility component of risk preferences. We provide evidence consistent with an important domain-general component that operates across all insurance choices. We find a considerably weaker relationship between one's insurance decisions and 401(k) asset allocation, although this relationship appears larger for more “financially sophisticated” individuals. Estimates from a stylized coverage choice model suggest that up to thirty percent of our sample makes choices that may be consistent across all six domains. PMID:24634517

  19. Generalized Empirical Likelihood Inference for Partially Nonlinear Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    肖燕婷; 孙晓青; 孙瑾

    2016-01-01

    In this paper, we study the construction of confidence regions for the unknown parameter in partially nonlinear models with longitudinal data. Using the empirical likelihood method, a generalized empirical log-likelihood ratio for the parameter in the nonlinear function is proposed and shown to be asymptotically chi-square distributed. At the same time, the maximum empirical likelihood estimator of the parameter in the nonlinear function is obtained and its asymptotic normality is proved.

  20. Group cognitive behavioral therapy targeting intolerance of uncertainty: a randomized trial for older Chinese adults with generalized anxiety disorder.

    Science.gov (United States)

    Hui, Chen; Zhihui, Yang

    2016-09-03

    China has become an aging society, but its social support systems for the elderly are underdeveloped, which may make older people anxious about their health and quality of life. Given the prevalence of generalized anxiety disorder (GAD) in the elderly, it is very important to pay more attention to treatment for older adults. Although cognitive behavioral therapy targeting intolerance of uncertainty (CBT-IU) has been applied to different groups of patients with GAD, few studies have been performed to date. In addition, the effects of CBT-IU are not well understood, especially when applied to older adults with GAD. Sixty-three Chinese older adults with a principal diagnosis of GAD were enrolled. Of these, 32 were randomized to receive group CBT-IU (intervention group) and 31 were untreated (control group). GAD and related symptoms were assessed across the intervention using the Penn State Worry Questionnaire, Intolerance of Uncertainty Scale-Chinese Version, Beck Anxiety Inventory, Beck Depression Inventory, Why Worry-II scale, Cognitive Avoidance Questionnaire, Generalized Anxiety Disorder Questionnaire-IV, and Generalized Anxiety Disorder Severity Scale. Changes from before to after the intervention, as well as at the six-month follow-up, were collected, and F tests and repeated-measures ANOVA were conducted to analyze the data. Compared to the control group, the scores of the intervention group decreased significantly after the intervention and at the six-month follow-up. In addition to significant main effects for time and group, the group × time interaction was also significant. These results indicate improvement in the CBT-IU group and persistence of the effect after six months. Group CBT-IU is effective in Chinese older adults with GAD, and its effects on GAD symptoms persist for at least six months after treatment.

  1. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.;

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  2. Maximally Localized States and Quantum Corrections of Black Hole Thermodynamics in the Framework of a New Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Yan-Gang Miao

    2015-01-01

    Full Text Available As a generalized uncertainty principle (GUP) leads to the effects of the minimal length of the order of the Planck scale and UV/IR mixing, some significant physical concepts and quantities are modified or corrected correspondingly. On the one hand, we derive the maximally localized states, the physical states displaying the minimal length uncertainty associated with a new GUP proposed in our previous work. On the other hand, in the framework of this new GUP we calculate quantum corrections to the thermodynamic quantities of the Schwarzschild black hole, such as the Hawking temperature, the entropy, and the heat capacity, and give a remnant mass of the black hole at the end of the evaporation process. Moreover, we compare our results with those obtained in the frameworks of several other GUPs. In particular, we observe a significant difference between the situations with and without the consideration of the UV/IR mixing effect in the quantum corrections to the evaporation rate and the decay time. That is, the decay time can greatly be prolonged in the former case, which implies that the quantum correction from the UV/IR mixing effect may give rise to a radical rather than a tiny influence on the Hawking radiation.

  3. Supply Chains Competition under Uncertainty Concerning Player’s Strategies and Customer Choice Behavior: A Generalized Nash Game Approach

    Directory of Open Access Journals (Sweden)

    A. Hafezalkotob

    2012-01-01

    Full Text Available Decision makers in a supply chain confront two main sources of uncertainty in the market environment: uncertainty about customers' purchasing behaviors and about rival chains' strategies. Focusing on competition between two supply chains, each customer is considered an independent player who selects products of these chains based on a random utility model. Similar to the quantal response equilibrium approach, we take account of customer rationality as an exogenous parameter. Moreover, it is assumed that decision makers in a supply chain can perceive an estimate of rival strategies about price and service level, formulated in the model by fuzzy strategies. In the competition model, a chain's decision makers consider a subjective probability of winning each customer, which is formulated by coupled constraints. These constraints connect the chains' strategies regarding each customer and yield a generalized Nash equilibrium problem. Since price cutting and increasing the service level are the main responses to a rival supply chain, after calculating optimal strategies we show that the more efficient response depends on customer preferences.

  4. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    Science.gov (United States)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  5. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  6. Empirical Likelihood and Diagnosis of Generalized Nonlinear Regression with Missing Data

    Institute of Scientific and Technical Information of China (English)

    牛翔宇; 冯予

    2016-01-01

    This paper studies statistical diagnostics for the generalized nonlinear regression model with missing data. With response variables missing at random, the empirical likelihood method is first used for parameter estimation and to obtain asymptotic confidence intervals; random simulation shows that the empirical likelihood method is superior to conventional methods for constructing confidence intervals. Influence analysis of the model is then carried out, and diagnostic statistics such as the empirical likelihood distance, the empirical Cook distance, and standardized pseudo-residuals are proposed. Finally, examples verify the effectiveness and feasibility of the statistical diagnosis method.

  7. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...
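
    Since GLUE recurs throughout this collection, a bare-bones version of it is sketched below for a generic simulator (the Nash-Sutcliffe informal likelihood, the 0.5 behavioural threshold, the toy exponential model and the sample size are placeholders, not choices made in the paper):

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """Nash-Sutcliffe efficiency, a common informal likelihood measure in GLUE."""
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def glue(simulator, obs, bounds, n_samples=5000, threshold=0.5, seed=0):
          """Minimal GLUE: uniform Monte Carlo sampling, behavioural selection and
          likelihood-weighted 5%-95% prediction bounds at each time step."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          thetas = rng.uniform(lo, hi, size=(n_samples, len(lo)))
          sims = np.array([simulator(theta) for theta in thetas])
          scores = np.array([nash_sutcliffe(obs, s) for s in sims])
          keep = scores > threshold                     # behavioural parameter sets
          behav = sims[keep]
          weights = scores[keep] / scores[keep].sum()   # rescaled likelihood weights
          lower, upper = [], []
          for j in range(behav.shape[1]):
              idx = np.argsort(behav[:, j])
              cdf = np.cumsum(weights[idx])
              lower.append(behav[idx, j][np.searchsorted(cdf, 0.05)])
              upper.append(behav[idx, j][np.searchsorted(cdf, 0.95)])
          return thetas[keep], np.array(lower), np.array(upper)

      # toy usage with a one-parameter exponential 'model'
      times = np.linspace(0.0, 1.0, 50)
      obs = np.exp(-2.0 * times) + np.random.default_rng(1).normal(0.0, 0.02, 50)
      behavioural, lo_band, hi_band = glue(lambda p: np.exp(-p[0] * times), obs, [(0.1, 5.0)])
      print(len(behavioural), "behavioural parameter sets")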

  8. The Change Point Identification of a Poisson Process Based on the Generalized Likelihood Ratio

    Institute of Scientific and Technical Information of China (English)

    赵俊

    2012-01-01

    Based on the generalized likelihood ratio (GLR) method, a GLR-based model for identifying the change point in Poisson processes is proposed with unknown parameters. A simulation experiment gives the reliability and performance of this model for change point identification. Under the assumption that there is only one change point in the process, the in-control dataset can be obtained and used for parameter estimation.
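
    A minimal version of the idea, for a single candidate change point in a sequence of Poisson counts with unknown rates (a simplified illustration, not the control-chart model of the paper), scans all split points and compares a one-rate fit against a two-rate fit with the rates replaced by their maximum likelihood estimates:

      import numpy as np
      from scipy.stats import poisson

      def glr_change_point(counts):
          """GLR scan for one change point in Poisson counts."""
          counts = np.asarray(counts)
          lam_all = counts.mean()
          ll_null = poisson.logpmf(counts, lam_all).sum()
          best_tau, best_glr = None, -np.inf
          for tau in range(1, len(counts)):          # change after observation tau
              lam1, lam2 = counts[:tau].mean(), counts[tau:].mean()
              ll_alt = (poisson.logpmf(counts[:tau], lam1).sum()
                        + poisson.logpmf(counts[tau:], lam2).sum())
              glr = 2.0 * (ll_alt - ll_null)
              if glr > best_glr:
                  best_tau, best_glr = tau, glr
          return best_tau, best_glr

      # toy usage: the rate shifts from 3 to 6 after the 40th observation
      rng = np.random.default_rng(7)
      data = np.concatenate([rng.poisson(3, 40), rng.poisson(6, 30)])
      print(glr_change_point(data))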

  9. Uncertainty Assessment in Long Term Urban Drainage Modelling

    DEFF Research Database (Denmark)

    Thorndahl, Søren

    the probability of system failures (defined as either flooding or surcharge of manholes or combined sewer overflow); (2) an application of the Generalized Likelihood Uncertainty Estimation methodology in which an event based stochastic calibration is performed; and (3) long term Monte Carlo simulations with the purpose of estimating the uncertainties on the extreme event statistics of maximum water levels and combined sewer overflow volumes in drainage systems. The thesis concludes that the uncertainties on both maximum water levels and combined sewer overflow volumes are considerable, especially on the large...

  10. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area.
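
    To make the idea concrete in the simplest possible setting (two binomial proportions with conjugate Beta posteriors; the counts are invented and unrelated to the body-height data of the paper), parameter uncertainty can be pushed through to a posterior over the likelihood ratio by Monte Carlo:

      import numpy as np

      rng = np.random.default_rng(11)

      # hypothetical counts: a feature observed k times out of n under each hypothesis
      k_p, n_p = 45, 60      # under the prosecution hypothesis
      k_d, n_d = 5, 80       # under the defence hypothesis

      # Beta(1, 1) priors give Beta posteriors for the two probabilities
      p_p = rng.beta(1 + k_p, 1 + n_p - k_p, size=100_000)
      p_d = rng.beta(1 + k_d, 1 + n_d - k_d, size=100_000)

      lr = p_p / p_d                       # posterior draws of the likelihood ratio
      lo, hi = np.percentile(lr, [2.5, 97.5])
      print(f"posterior median LR = {np.median(lr):.1f}, 95% interval = ({lo:.1f}, {hi:.1f})")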

  11. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector is named the equalized near maximum likelihood detector, which combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  12. Revising the Intolerance of Uncertainty Model of Generalized Anxiety Disorder: Evidence from UK and Italian Undergraduate Samples

    Directory of Open Access Journals (Sweden)

    Gioia Bottesi

    2016-11-01

    Full Text Available The Intolerance of Uncertainty Model (IUM) of Generalized Anxiety Disorder (GAD) attributes a key role to Intolerance of Uncertainty (IU), and additional roles to Positive Beliefs about Worry (PBW), Negative Problem Orientation (NPO), and Cognitive Avoidance (CA), in the development and maintenance of worry, the core feature of GAD. Although the role of the IUM components in worry and GAD has been demonstrated extensively, to date no studies have explicitly assessed whether and how PBW, NPO, and CA might turn IU into worry and somatic anxiety. The current studies sought to re-examine the IUM by assessing the relationships between the model's components in two different non-clinical samples made up of UK and Italian undergraduate students. One hundred and seventy UK undergraduates and 488 Italian undergraduates completed measures assessing IU, worry, somatic anxiety, depression, and refined measures of NPO, CA, and PBW. In each sample, two mediation models were conducted in order to test whether PBW, NPO, and CA differentially mediate the path from IU to worry and the path from IU to somatic anxiety. Secondly, it was tested whether IU also moderates the mediations. Main findings showed that, in the UK sample, only NPO mediated the path from IU to worry; as far as the path to anxiety is concerned, none of the putative mediators were significant. In contrast, in the Italian sample PBW and NPO were mediators in the path from IU to worry, whereas only CA played a mediational role in the path from IU to somatic anxiety. Lastly, IU was observed to moderate only the association between NPO and worry, and only in the Italian sample. Some important cross-cultural, conceptual, and methodological issues raised by the main results are discussed.

  14. A Generalized Perturbation Theory Solver In Rattlesnake Based On PETSc With Application To TREAT Steady State Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, Sebastian; Wang, Congjian; Wang, Yaqi; Kong, Fande; Ortensi, Javier; Baker, Benjamin; Gleicher, Frederick; DeHart, Mark; Martineau, Richard

    2017-04-01

    Rattlesnake and MAMMOTH are the designated TREAT analysis tools currently being developed at the Idaho National Laboratory. Concurrent with development of the multi-physics, multi-scale capabilities, sensitivity analysis and uncertainty quantification (SA/UQ) capabilities are required for predictive modeling of the TREAT reactor. For steady-state SA/UQ, which is essential for setting initial conditions for the transients, generalized perturbation theory (GPT) will be used. This work describes the implementation of a PETSc-based solver for the generalized adjoint equations, which constitute an inhomogeneous, rank-deficient problem. The standard approach is to use an outer iteration strategy with repeated removal of the fundamental mode contamination. The described GPT algorithm directly solves the GPT equations without the need of an outer iteration procedure by using Krylov subspaces that are orthogonal to the operator’s nullspace. Three test problems are solved and provide sufficient verification of Rattlesnake’s GPT capability. We conclude with a preliminary example evaluating the impact of the boron distribution in the TREAT reactor using perturbation theory.
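    The sketch below is not Rattlesnake/PETSc code; it is a small SciPy illustration, under simplifying assumptions (a symmetric toy operator with a known one-dimensional nullspace), of the core idea: solve the rank-deficient system with Krylov iterates kept orthogonal to the nullspace instead of repeatedly subtracting the fundamental mode.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, cg

    n = 50
    # A singular, symmetric operator (1-D graph Laplacian); its nullspace is
    # spanned by the constant vector -- a stand-in for the fundamental mode.
    A = diags([-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1]).toarray()
    A[0, 0] = A[-1, -1] = 1.0          # Neumann-like ends -> rank deficient
    null = np.ones(n) / np.sqrt(n)     # normalised nullspace vector

    def project(v):
        """Remove the nullspace component of v."""
        return v - null * (null @ v)

    # A generalized-adjoint-style right-hand side must be orthogonal to the nullspace.
    b = project(np.random.default_rng(1).normal(size=n))

    # Krylov solve restricted to the orthogonal complement of the nullspace:
    # the projection is applied inside every operator application.
    op = LinearOperator((n, n), matvec=lambda v: project(A @ project(v)))
    x, info = cg(op, b)
    x = project(x)

    print("CG info flag:", info)                  # 0 means converged
    print("residual norm:", np.linalg.norm(A @ x - b))
    print("nullspace component of x:", null @ x)  # ~ 0 by construction
    ```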

  15. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived from integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient has a larger effect on system efficiency than the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
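    A hedged sketch of the crisp core of such a model: the fuzzy credibility constraints of GFCCFP are not reproduced here; the code only shows a plain linear fractional program solved via the classical Charnes-Cooper transformation with SciPy, and all coefficients are invented placeholders.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Maximise (c @ x + alpha) / (d @ x + beta)  subject to  A @ x <= b, x >= 0,
    # assuming d @ x + beta > 0 on the feasible set (illustrative numbers only,
    # e.g. x = water allocated to two crops, objective = benefit per unit water).
    c, alpha = np.array([4.0, 3.0]), 2.0     # numerator: economic benefit
    d, beta = np.array([1.0, 2.0]), 5.0      # denominator: water consumed
    A = np.array([[1.0, 1.0],
                  [2.0, 1.0]])
    b = np.array([10.0, 15.0])

    # Charnes-Cooper transformation: y = t*x, t = 1/(d @ x + beta) turns the
    # ratio objective into a linear program in (y, t).
    n = len(c)
    obj = -np.concatenate([c, [alpha]])                 # linprog minimises
    A_ub = np.hstack([A, -b[:, None]])                  # A y - b t <= 0
    b_ub = np.zeros(len(b))
    A_eq = np.concatenate([d, [beta]])[None, :]         # d y + beta t = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * (n + 1)

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    y, t = res.x[:n], res.x[n]
    x = y / t
    print("optimal allocation:", x)
    print("optimal ratio:", (c @ x + alpha) / (d @ x + beta))
    ```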

  16. Lessons about likelihood functions from nuclear physics

    CERN Document Server

    Hanson, Kenneth M

    2007-01-01

    Least-squares data analysis is based on the assumption that the normal (Gaussian) distribution appropriately characterizes the likelihood, that is, the conditional probability of each measurement d, given a measured quantity y, p(d | y). On the other hand, there is ample evidence in nuclear physics of significant disagreements among measurements, which are inconsistent with the normal distribution, given their stated uncertainties. In this study the histories of 99 measurements of the lifetimes of five elementary particles are examined to determine what can be inferred about the distribution of their values relative to their stated uncertainties. Taken as a whole, the variations in the data are somewhat larger than their quoted uncertainties would indicate. These data strongly support using a Student t distribution for the likelihood function instead of a normal. The most probable value for the order of the t distribution is 2.6 +/- 0.9. It is shown that analyses based on long-tailed t-distribution likelihood...
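    A small sketch of the point being made, using made-up measurements rather than the particle-lifetime histories analysed in the study: a long-tailed Student t likelihood (here with the reported order 2.6) is far less influenced by a discrepant measurement than a normal likelihood.

    ```python
    import numpy as np
    from scipy import stats

    # Measurements of the same quantity with their stated 1-sigma uncertainties;
    # the last point is a deliberate outlier (values are made up).
    d = np.array([10.1, 9.9, 10.2, 10.0, 12.5])
    sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.3])

    def neg_log_like(y, dist):
        """Negative log-likelihood of a common true value y."""
        return -np.sum(dist.logpdf((d - y) / sigma) - np.log(sigma))

    grid = np.linspace(9.0, 13.0, 2001)
    normal = stats.norm
    student = stats.t(df=2.6)   # order reported in the abstract: 2.6 +/- 0.9

    y_hat_normal = grid[np.argmin([neg_log_like(y, normal) for y in grid])]
    y_hat_t = grid[np.argmin([neg_log_like(y, student) for y in grid])]

    # The Gaussian estimate is dragged towards the outlier; the t estimate is not.
    print("ML value, normal likelihood :", y_hat_normal)
    print("ML value, Student t (df=2.6):", y_hat_t)
    ```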

  17. Generalized Likelihood Ratio Based SAR Image Speckle Suppression Algorithm in Wavelet Domain

    Institute of Scientific and Technical Information of China (English)

    侯建华; 刘欣达; 陈稳; 陈少波

    2015-01-01

    A Bayes shrinkage formula is derived under the framework of joint detection and estimation theory, and a wavelet-domain SAR image despeckling algorithm based on the generalized likelihood ratio is realized. First, a redundant wavelet transform is applied directly to the original speckled SAR image, and a binary mask is obtained for each wavelet coefficient. The likelihood conditional probabilities of the speckle noise and of the useful signal are modelled by a scaled exponential distribution and a Gamma distribution, respectively. According to the mask, the parameters of the two models are estimated by maximum likelihood, and the likelihood conditional probability ratio is then computed. Experimental results show that the proposed method effectively filters speckle noise while preserving image details as far as possible; satisfactory results are achieved on both synthetically speckled images and real SAR images.

  18. Weak Consistency and Convergence Rate of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    邓春亮; 胡南辉

    2012-01-01

    We study the solution β̂_n of the quasi-likelihood equation for generalized linear models (GLMs) under a non-natural link function. Under the condition λ_n → ∞ and some other regularity conditions, the weak consistency of the solution is proved, and its rate of convergence to the true value β_0 is shown to be O_p(λ_n^(-1/2)), where λ_n (λ̄_n) denotes the smallest (largest) eigenvalue of the matrix S_n = Σ_{i=1}^n X_i X_i'.

  19. Intolerance of uncertainty as a mediator of reductions in worry in a cognitive behavioral treatment program for generalized anxiety disorder.

    Science.gov (United States)

    Bomyea, J; Ramsawh, H; Ball, T M; Taylor, C T; Paulus, M P; Lang, A J; Stein, M B

    2015-06-01

    Growing evidence suggests that intolerance of uncertainty (IU) is a cognitive vulnerability that is a central feature across diverse anxiety disorders, including generalized anxiety disorder (GAD). Although cognitive behavioral therapy (CBT) has been shown to reduce IU, it remains to be established whether or not reductions in IU mediate reductions in worry. This study examined the process of change in IU and worry in a sample of 28 individuals with GAD who completed CBT. Changes in IU and worry, assessed bi-weekly during treatment, were analyzed using multilevel mediation models. Results revealed that change in IU mediated change in worry (ab = -0.20; 95% CI [-.35, -.09]), but change in worry did not mediate change in IU (ab = -0.16; 95% CI [-.06, .12]). Findings indicated that reductions in IU accounted for 59% of the reductions in worry observed over the course of treatment, suggesting that changes in IU are not simply concomitants of changes in worry. Findings support the idea that IU is a critical construct underlying GAD.

  20. Using LIDAR and Quickbird Data to Model Plant Production and Quantify Uncertainties Associated with Wetland Detection and Land Cover Generalizations

    Science.gov (United States)

    Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the MOderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  2. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  3. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
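    The sketch below is not the Sherpa/CSC implementation; it is a toy Poisson likelihood-ratio comparison of a background-only and a background-plus-source hypothesis on a synthetic counts image, with an assumed Gaussian "PSF" and invented numbers.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)

    # Toy counts image: flat background of 0.1 counts/pixel plus a weak source
    # smeared by a Gaussian "PSF" (all numbers illustrative).
    size, b_true, s_true = 21, 0.1, 8.0
    yy, xx = np.mgrid[:size, :size]
    psf = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * 2.0 ** 2))
    psf /= psf.sum()
    counts = rng.poisson(b_true + s_true * psf)

    def neg_loglike(params):
        b, s = params
        mu = b + s * psf
        return -np.sum(stats.poisson.logpmf(counts, mu))

    # Background-only hypothesis: fit b with the source flux fixed at 0.
    fit_b = minimize_scalar(lambda b: neg_loglike((b, 0.0)), bounds=(1e-6, 10), method="bounded")

    # Background-plus-source hypothesis: crude grid search over the source flux,
    # with the background held at its background-only fit for simplicity.
    fluxes = np.linspace(0.0, 30.0, 301)
    nll_bs = min(neg_loglike((fit_b.x, s)) for s in fluxes)

    # Likelihood-ratio statistic; large values favour a real source.
    TS = 2.0 * (fit_b.fun - nll_bs)
    print("detection test statistic:", TS)
    ```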

  4. Removal of Asperger's syndrome from the DSM V: community response to uncertainty.

    Science.gov (United States)

    Parsloe, Sarah M; Babrow, Austin S

    2016-01-01

    The May 2013 release of the new version of the Diagnostic and Statistical Manual of Mental Disorders (DSM V) subsumed Asperger's syndrome under the wider diagnostic label of autism spectrum disorder (ASD). The revision has created much uncertainty in the community affected by this condition. This study uses problematic integration theory and thematic analysis to investigate how participants in Wrong Planet, a large online community associated with autism and Asperger's syndrome, have constructed these uncertainties. The analysis illuminates uncertainties concerning both the likelihood of diagnosis and value of diagnosis, and it details specific issues within these two general areas of uncertainty. The article concludes with both conceptual and practical implications.

  5. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis on an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as the occurrence...... if the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it was shown to be quite difficult to get good fits of the whole time series....

  6. Uncertainty vs. Information (Invited)

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.

  7. Sensitivity of Subjective Decisions in the GLUE Methodology for Quantifying the Uncertainty in the Flood Inundation Map for Seymour Reach in Indiana, USA

    Directory of Open Access Journals (Sweden)

    Younghun Jung

    2014-07-01

    Full Text Available Generalized likelihood uncertainty estimation (GLUE is one of the widely-used methods for quantifying uncertainty in flood inundation mapping. However, the subjective nature of its application involving the definition of the likelihood measure and the criteria for defining acceptable versus unacceptable models can lead to different results in quantifying uncertainty bounds. The objective of this paper is to perform a sensitivity analysis of the effect of the choice of likelihood measures and cut-off thresholds used in selecting behavioral and non-behavioral models in the GLUE methodology. By using a dataset for a reach along the White River in Seymour, Indiana, multiple prior distributions, likelihood measures and cut-off thresholds are used to investigate the role of subjective decisions in applying the GLUE methodology for uncertainty quantification related to topography, streamflow and Manning’s n. Results from this study show that a normal pdf produces a narrower uncertainty bound compared to a uniform pdf for an uncertain variable. Similarly, a likelihood measure based on water surface elevations is found to be less affected compared to other likelihood measures that are based on flood inundation area and width. Although the findings from this study are limited due to the use of a single test case, this paper provides a framework that can be utilized to gain a better understanding of the uncertainty while applying the GLUE methodology in flood inundation mapping.
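    For readers unfamiliar with the mechanics being discussed, the following is a generic GLUE sketch on a toy two-parameter model rather than the flood-inundation setup of the paper; the prior ranges, the Nash-Sutcliffe likelihood measure and the behavioural cut-off are deliberately arbitrary choices, which is precisely the subjectivity the study investigates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    t = np.arange(0, 50.0)
    def model(a, b):
        """Toy recession model standing in for the hydraulic/hydrologic model."""
        return a * np.exp(-b * t)

    obs = model(20.0, 0.08) + rng.normal(0.0, 0.8, t.size)   # synthetic "observations"

    # 1) Monte Carlo sampling from (subjectively chosen) uniform priors
    n = 20000
    a = rng.uniform(5.0, 40.0, n)
    b = rng.uniform(0.01, 0.30, n)
    sims = model(a[:, None], b[:, None])                      # n x len(t)

    # 2) Likelihood measure: Nash-Sutcliffe efficiency (one of many possible choices)
    nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

    # 3) Behavioural / non-behavioural split by a subjective cut-off threshold
    behavioural = nse > 0.7
    w = nse[behavioural]
    w = w / w.sum()                                           # likelihood weights

    # 4) Likelihood-weighted 5-95% uncertainty bounds at each time step
    def weighted_quantile(x, q, w):
        order = np.argsort(x)
        cdf = np.cumsum(w[order])
        return np.interp(q, cdf, x[order])

    bounds = np.array([[weighted_quantile(sims[behavioural, j], q, w) for q in (0.05, 0.95)]
                       for j in range(t.size)])
    print("behavioural sets:", behavioural.sum())
    print("bounds at t=10:", bounds[10])
    ```

    Changing the prior, the likelihood measure or the 0.7 threshold changes the width of the bounds, which is the sensitivity the paper quantifies.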

  8. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...... is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared...

  9. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  10. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum like...

  11. Likelihood based testing for no fractional cointegration

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    We consider two likelihood ratio tests, so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step to generalize the so-called Johansen's procedure to the fractional cointegration case. The s...

  12. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  13. Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    We study the weak consistency of the solution β̂_n of the quasi-maximum likelihood equation L_n(β) = Σ_{i=1}^n X_i H(X_i'β) Λ^(-1)(X_i'β)(y_i − h(X_i'β)) = 0 for generalized linear models under a non-canonical link function. Under this assumption and some other mild conditions, we establish the convergence-rate result β̂_n − β_0 ≠ O_p(λ_n^(-1/2)), and show that a necessary condition for the weak consistency of the quasi-maximum likelihood estimate is that S_n^(-1) → 0 as n → ∞.

  14. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  15. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    Science.gov (United States)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by gradually searching the parameter space from low-likelihood to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling; however, M-H is not efficient for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAMzs is incorporated into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from heavy instability. In addition, the heavy computational cost of the large number of model executions is overcome by using adaptive sparse grid surrogates.
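    A bare-bones illustration of the estimator (not the authors' code): nested sampling of a one-dimensional toy problem where the evidence is known analytically, using plain rejection sampling for the likelihood-constrained draws in place of the Metropolis-Hastings or DREAMzs local samplers discussed above.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Toy problem: uniform prior on [0, 10], Gaussian likelihood around theta = 5.
    prior = stats.uniform(0.0, 10.0)
    def loglike(theta):
        return stats.norm.logpdf(theta, loc=5.0, scale=0.5)

    n_live, n_iter = 200, 800
    live = prior.rvs(n_live, random_state=rng)
    live_logl = loglike(live)

    log_z = -np.inf                       # running log-evidence
    log_x_prev = 0.0                      # log prior volume, starts at 1
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)
        log_l_min = live_logl[worst]
        log_x = -i / n_live               # expected log prior volume after i shrinkages
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        log_z = np.logaddexp(log_z, log_l_min + log_w)
        log_x_prev = log_x
        # Replace the worst live point by a prior draw with higher likelihood
        # (plain rejection sampling; fine in 1-D, hopeless in high dimensions,
        # which is why efficient local samplers matter).
        while True:
            cand = prior.rvs(random_state=rng)
            if loglike(cand) > log_l_min:
                break
        live[worst], live_logl[worst] = cand, loglike(cand)

    # Add the contribution of the remaining live points
    log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl))) + log_x_prev)
    print("nested sampling log-evidence:", log_z)
    print("analytic log-evidence       :", np.log(1.0 / 10.0))  # N(5, 0.5) mass inside [0, 10] ~ 1
    ```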

  16. Uncertainty, learning, and the optimal management of wildlife

    Science.gov (United States)

    Williams, B.K.

    2001-01-01

    Wildlife management is limited by uncontrolled and often unrecognized environmental variation, by limited capabilities to observe and control animal populations, and by a lack of understanding about the biological processes driving population dynamics. In this paper I describe a comprehensive framework for management that includes multiple models and likelihood values to account for structural uncertainty, along with stochastic factors to account for environmental variation, random sampling, and partial controllability. Adaptive optimization is developed in terms of the optimal control of incompletely understood populations, with the expected value of perfect information measuring the potential for improving control through learning. The framework for optimal adaptive control is generalized by including partial observability and non-adaptive, sample-based updating of model likelihoods. Passive adaptive management is derived as a special case of constrained adaptive optimization, representing a potentially efficient suboptimal alternative that nonetheless accounts for structural uncertainty.

  17. Adaptive Beam Space Transformation Generalized Likelihood Ratio Test Algorithm Using Acoustic Vector Sensor Array

    Institute of Scientific and Technical Information of China (English)

    梁国龙; 陶凯; 范展

    2015-01-01

    In order to resolve the detection problem of passive remote weak targets under a strong interference background, a detection algorithm based on adaptive beam space transformation using an acoustic vector sensor array is proposed. First, by designing a beamspace matrix that covers the observed sector and rejects interference signals outside the sector, the array output data are transformed to beamspace. Then, the generalized likelihood ratio test is derived in beamspace. Simulation results show that the method can efficiently detect passive weak targets under a strong interference background and provides constant false alarm rate (CFAR) detection.

  18. Gross Error Detection for Nonlinear Dynamic Chemical Processes Based on Generalized Likelihood Ratios

    Institute of Scientific and Technical Information of China (English)

    王莉; 金思毅; 黄兆杰

    2013-01-01

    The generalized likelihood ratio (GLR) test is an effective gross-error detection method for linear steady-state data reconciliation. In this paper, the differential and algebraic constraints of the dynamic data reconciliation model were transformed into matrix form and the nonlinear constraints were linearized. Based on these two steps, GLR was successfully applied to a continuous stirred tank reactor (CSTR) system, and its gross-error detection performance in the nonlinear dynamic system was evaluated. Statistical results show that the gross-error detection rate depends on the size of the gross error and the length of the moving window: the detection rate improves as the gross error increases and as the window length increases.
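    The sketch below shows only the classical steady-state linear form of the GLR test for measurement bias (in the spirit of Narasimhan and Mah), not the dynamic CSTR formulation of the paper; the flow network, variances and bias are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Mass-balance constraints A @ x_true = 0 for a small flow network:
    # stream 1 splits into streams 2 and 3, which feed streams 4 and 5.
    A = np.array([[1.0, -1.0, -1.0,  0.0,  0.0],
                  [0.0,  1.0,  0.0, -1.0,  0.0],
                  [0.0,  0.0,  1.0,  0.0, -1.0]])
    x_true = np.array([10.0, 6.0, 4.0, 6.0, 4.0])
    sigma = np.full(5, 0.15)
    Sigma = np.diag(sigma ** 2)

    # Measurements with random error plus a gross error (bias) on stream 2.
    y = x_true + rng.normal(0.0, sigma) + np.array([0.0, 1.0, 0.0, 0.0, 0.0])

    # GLR test: constraint residuals and their covariance
    r = A @ y
    V = A @ Sigma @ A.T
    Vinv = np.linalg.inv(V)

    # For a hypothesised bias in measurement j, the signature vector is f_j = A e_j
    # and the GLR statistic is T_j = (f_j' V^-1 r)^2 / (f_j' V^-1 f_j).
    T = np.array([(A[:, j] @ Vinv @ r) ** 2 / (A[:, j] @ Vinv @ A[:, j])
                  for j in range(A.shape[1])])

    threshold = stats.chi2.ppf(0.95, df=1)   # per-test threshold (no multiplicity correction)
    print("GLR statistics:", np.round(T, 2))
    print("suspected gross error in stream:", int(np.argmax(T)) + 1, "| threshold:", round(threshold, 2))
    ```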

  19. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10

  20. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  1. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum...... likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  2. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    Science.gov (United States)

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  3. Mlpnp - a Real-Time Maximum Likelihood Solution to the Perspective-N Problem

    Science.gov (United States)

    Urban, S.; Leitloff, J.; Hinz, S.

    2016-06-01

    In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimation of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties and remains real-time capable at the same time. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space.

  4. The critical role of uncertainty in projections of hydrological extremes

    Science.gov (United States)

    Meresa, Hadush K.; Romanowicz, Renata J.

    2017-08-01

    This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring one of the three uncertainty sources may cause great risk to future hydrological extreme adaptations and water resource planning and management.
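    A minimal sketch of the distribution-fit step only: a GEV fitted to synthetic annual maxima with SciPy and converted to return levels, plus a crude parametric bootstrap mirroring the "distribution fit" uncertainty source; the data are not the Biala Tarnowska series.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(6)

    # Synthetic annual maximum flows (m3/s), standing in for the projected series.
    annual_max = genextreme.rvs(c=-0.1, loc=120.0, scale=35.0, size=60, random_state=rng)

    # Maximum-likelihood GEV fit; note that scipy's shape parameter c corresponds
    # to -xi in the usual hydrological convention.
    c, loc, scale = genextreme.fit(annual_max)

    # T-year return level = quantile with non-exceedance probability 1 - 1/T.
    for T in (10, 50, 100):
        q = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
        print(f"{T:>3}-year return level: {q:7.1f} m3/s")

    # A crude parametric-bootstrap spread for the 100-year level.
    boot = []
    for _ in range(200):
        resample = genextreme.rvs(c, loc=loc, scale=scale, size=annual_max.size, random_state=rng)
        cb, lb, sb = genextreme.fit(resample)
        boot.append(genextreme.ppf(0.99, cb, loc=lb, scale=sb))
    print("100-year level 90% bootstrap interval:", np.percentile(boot, [5, 95]).round(1))
    ```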

  5. Sustainability and uncertainty

    DEFF Research Database (Denmark)

    Jensen, Karsten Klint

    2007-01-01

    and infers prescriptions from this requirement. These two approaches may conflict, and in this conflict the top-down approach has the upper hand, ethically speaking. However, the implicit goal in the top-down approach of justice between generations needs to be refined in several dimensions. But even given...... a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable...... are decisions under uncertainty. There might be different judgments on likelihoods; but even given some set of probabilities, there might be disagreement on the right level of precaution in face of the uncertainty....

  6. Regions of constrained maximum likelihood parameter identifiability

    Science.gov (United States)

    Lee, C.-H.; Herget, C. J.

    1975-01-01

    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  7. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  8. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...

  9. General equilibrium asset pricing under model uncertainty

    Institute of Scientific and Technical Information of China (English)

    李仲飞; 高金窑

    2011-01-01

    By introducing discounted entropy into the CIR model, this article investigates general equilibrium asset pricing under model uncertainty. A risk-free rate pricing equation, an intertemporal capital asset pricing model, a consumption-based capital asset pricing model, a financial asset pricing formula and a stochastic discount factor are derived under model uncertainty. It is found that model uncertainty aversion decreases the equilibrium risk-free rate while increasing the equity premium, and hence the new asset pricing model can explain the risk-free rate puzzle and the equity premium puzzle simultaneously.

  10. Uncertainty and Climate Change and its effect on Generalization and Prediction abilities by creating Diverse Classifiers and Feature Section Methods using Information Fusion

    Directory of Open Access Journals (Sweden)

    Y. P. Kosta

    2010-11-01

    Full Text Available The model forecast suggests a deterministic approach. Forecasting was traditionally done by a single model - deterministic prediction - but recent years have witnessed drastic changes. Today, with Information Fusion (ensemble) techniques it is possible to improve the generalization ability of classifiers with high levels of reliability. Through Information Fusion it is easily possible to combine diverse and independent outcomes for decision-making. This approach adopts the idea of combining the results of multiple methods (two-way interactions between them) using an appropriate model on the test set. Although uncertainties are often very significant, for the purpose of a single prediction, especially at the initial stage, one does not consider uncertainties in the model, the initial conditions, or the very nature of the climate (environment or atmosphere) itself when using a single model. If we make small changes in the initial parameter setting, it will result in a change in the predictive accuracy of the model. Similarly, uncertainty in model physics can result in large forecast differences and errors. So, instead of running one prediction, run a collection/package/bundle (ensemble) of predictions, each one kick-starting from a different initial state or with different conditions and sequentially executing the next. The variations resulting from the execution of different prediction packages/models could then be used (independently combined or aggregated) to estimate the uncertainty of the prediction, giving us better accuracy and reliability. In this paper the authors propose to use an Information Fusion technique that will provide insight into the probable key parameters necessary to purposefully evaluate the successes of a new generation of products and services, improving forecasting. Ensembles can be creatively applied to provide insight into the new generation of products, yielding higher probabilities of success. Ensembles will yield critical features of the products and also provide insight to

  11. Measurement uncertainty.

    Science.gov (United States)

    Bartley, David; Lidén, Göran

    2008-08-01

    The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can, if needed, sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. Without a measureless and perpetual uncertainty, the drama of human life would be destroyed. Winston Churchill.

  12. An Algorithm for Detecting the Onset of Muscle Contraction Based on the Generalized Likelihood Ratio Test

    Institute of Scientific and Technical Information of China (English)

    徐琦; 程俊银; 周慧; 杨磊

    2012-01-01

    The surface electromyography (sEMG) of the stump in an amputee is often used to control the actions of a myoelectric prosthesis. For the low signal-to-noise ratio (SNR) sEMG signals recorded from the stump muscle, a generalized likelihood ratio (GLR) method was proposed to detect the onset of muscle contraction, in which the decision threshold depends on the SNR of the sEMG signal; an off-line simulation was used to determine this relationship. For simulated sEMG signals with a given SNR, different thresholds were tested, and the optimal threshold was obtained when detection accuracy was maximized. A fitted curve describing the relationship between SNR and decision threshold was thereby derived. The sEMG signals are then analyzed on-line with the GLR test for onset detection of muscle contraction, with the decision threshold chosen from the fitted curve according to the SNR. Compared with classical algorithms, on simulated sEMG traces the error mean and standard deviation of the estimated contraction onset were reduced by at least 35% and 43%, respectively; on real EMG signals, the reductions were at least 29% and 23%, respectively. The proposed GLR-based onset-detection algorithm was therefore more accurate than other methods when the SNR of the sEMG signal was low.
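    A simplified sketch of the idea (not the authors' code): a sliding-window GLR detector for a variance increase in Gaussian noise, with the paper's SNR-dependent threshold replaced by a single hand-picked value and all signal parameters invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    fs = 1000                       # Hz, assumed sampling rate
    t = np.arange(0, 2.0, 1.0 / fs)
    onset_true = 1.0                # s
    sigma0 = 1.0                    # baseline (rest) noise standard deviation

    # Toy sEMG: zero-mean Gaussian whose variance rises after contraction onset.
    emg = rng.normal(0.0, sigma0, t.size)
    emg[t >= onset_true] += rng.normal(0.0, 1.5, (t >= onset_true).sum())

    W = 100                         # sliding-window length (samples)
    h = 30.0                        # decision threshold (would be SNR-dependent)

    def glr_statistic(x, sigma0):
        """Log generalized likelihood ratio for a variance increase in window x."""
        s2 = np.mean(x ** 2)        # ML variance estimate under H1
        r = s2 / sigma0 ** 2
        return 0.5 * len(x) * (r - 1.0 - np.log(r))

    onset_idx = None
    for k in range(W, t.size):
        if glr_statistic(emg[k - W:k], sigma0) > h:
            onset_idx = k - W       # first sample of the detecting window
            break

    print("true onset  :", onset_true, "s")
    print("detected at :", None if onset_idx is None else round(t[onset_idx], 3), "s")
    ```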

  13. Groups, information theory, and Einstein's likelihood principle

    Science.gov (United States)

    Sicuro, Gabriele; Tempesta, Piergiulio

    2016-04-01

    We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.

  14. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...... the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...... of the prediction bands can be questioned, due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour....

  15. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  16. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  17. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...... in an example concerning minke whales in the North Atlantic. Our modelling and computational approach is flexible but demanding in terms of computing time....

  18. Introduction to general and generalized linear models

    CERN Document Server

    Madsen, Henrik

    2010-01-01

    Introduction: Examples of types of data; Motivating examples; A first view on the models. The Likelihood Principle: Introduction; Point estimation theory; The likelihood function; The score function; The information matrix; Alternative parameterizations of the likelihood; The maximum likelihood estimate (MLE); Distribution of the ML estimator; Generalized loss-function and deviance; Quadratic approximation of the log-likelihood; Likelihood ratio tests; Successive testing in hypothesis chains; Dealing with nuisance parameters. General Linear Models: Introduction; The multivariate normal distribution; General linear mod

  19. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
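
    A minimal sketch of this kind of estimator is given below: a grid search over candidate fundamental frequencies in which each channel is projected onto a harmonic basis and the per-channel residual variances enter a concentrated log-likelihood. The function names, the white-Gaussian-noise assumption and the fixed number of harmonics are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def harmonic_basis(f0, n_samples, fs, n_harmonics):
    """Real cosine/sine basis at harmonics of f0 (Hz)."""
    t = np.arange(n_samples) / fs
    cols = []
    for l in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * l * f0 * t))
        cols.append(np.sin(2 * np.pi * l * f0 * t))
    return np.column_stack(cols)

def ml_pitch(channels, fs, f0_grid, n_harmonics=5):
    """
    Grid-search ML pitch estimate for multi-channel data: channels share one
    fundamental frequency but have channel-specific amplitudes, phases and
    noise levels.  Under white Gaussian noise with unknown per-channel variance,
    maximising the likelihood amounts to minimising sum_c N*log(residual power_c).
    """
    channels = [np.asarray(x, dtype=float) for x in channels]
    best_f0, best_cost = None, np.inf
    for f0 in f0_grid:
        cost = 0.0
        for x in channels:
            Z = harmonic_basis(f0, len(x), fs, n_harmonics)
            coef, _, _, _ = np.linalg.lstsq(Z, x, rcond=None)  # LS projection
            resid = x - Z @ coef
            cost += len(x) * np.log(np.mean(resid ** 2))       # concentrated cost
        if cost < best_cost:
            best_f0, best_cost = f0, cost
    return best_f0

# Toy usage: two channels, same 220 Hz pitch, different amplitudes/phases/noise
fs = 8000
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
ch1 = np.cos(2 * np.pi * 220 * t) + 0.5 * np.cos(2 * np.pi * 440 * t + 1.0) \
      + 0.1 * rng.standard_normal(t.size)
ch2 = 0.7 * np.sin(2 * np.pi * 220 * t + 0.3) + 0.3 * rng.standard_normal(t.size)
print(ml_pitch([ch1, ch2], fs, f0_grid=np.arange(100, 400, 1.0)))
```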

  20. Rate of strong consistency of the maximum quasi-likelihood estimator in quasi-likelihood nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) is obtained in QLNM. In an important case this rate is O(n^{-1/2}(log log n)^{1/2}), which is exactly the law-of-the-iterated-logarithm (LIL) rate for partial sums of i.i.d. variables and thus cannot be improved.

  1. Committee of machine learning predictors of hydrological models uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri

    2014-05-01

    In prediction of uncertainty based on machine learning methods, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of a deterministic output from a hydrological model [2]. Inputs to these models are specially identified representative variables (precipitation and flows of past events). The trained machine learning models are then employed to predict the model output uncertainty specific to new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models). There is no clear evidence which model is the best, since there is no basis for comparison. A solution could be to form a committee of all models and to use a dynamic averaging scheme to generate the final output [3]; a sketch of such a committee is given below. This approach is applied to estimate the uncertainty of streamflow simulations from the conceptual hydrological model HBV in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
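
    The sketch below illustrates one possible dynamic averaging scheme for such a committee: member weights are recomputed at every time step from a recent-performance measure and applied to the members' uncertainty predictions. The weighting rule and variable names are illustrative assumptions, not the scheme of reference [3].

```python
import numpy as np

def committee_predict(member_preds, recent_errors, eps=1e-9):
    """
    Combine quantile predictions from several uncertainty models.
    member_preds : array (n_members, n_times), e.g. predicted 90% quantile of model error
    recent_errors: array (n_members, n_times), moving-average performance measure
                   (lower is better) for each member at each time step
    Weights are recomputed at every step, inversely proportional to recent error.
    """
    member_preds = np.asarray(member_preds, dtype=float)
    recent_errors = np.asarray(recent_errors, dtype=float)
    w = 1.0 / (recent_errors + eps)          # skill-based weights
    w /= w.sum(axis=0, keepdims=True)        # normalise per time step
    return (w * member_preds).sum(axis=0)    # weighted committee output

# Toy usage: 3 members, 5 time steps
preds = np.array([[2.1, 2.3, 2.0, 1.9, 2.2],
                  [1.8, 2.0, 2.4, 2.1, 2.0],
                  [2.5, 2.6, 2.2, 2.3, 2.4]])
errs = np.array([[0.2, 0.2, 0.3, 0.3, 0.2],
                 [0.4, 0.4, 0.2, 0.2, 0.3],
                 [0.6, 0.5, 0.5, 0.6, 0.6]])
print(committee_predict(preds, errs))
```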

  2. Parameter uncertainty analysis for simulating streamflow in a river catchment of Vietnam

    Directory of Open Access Journals (Sweden)

    Dao Nguyen Khoi

    2015-07-01

    Full Text Available Hydrological models play vital roles in the management of water resources. However, the calibration of hydrological models is a great challenge because of the uncertainty involved in their large number of parameters. In this study, four uncertainty analysis methods, including Generalized Likelihood Uncertainty Estimation (GLUE), Parameter Solution (ParaSol), Particle Swarm Optimization (PSO), and Sequential Uncertainty Fitting (SUFI-2), were employed to perform parameter uncertainty analysis of streamflow simulation in the Srepok River Catchment by using the Soil and Water Assessment Tool (SWAT) model. The four methods were compared in terms of the model prediction uncertainty, the model performance, and the computational efficiency. The results showed that the SUFI-2 method has advantages in model calibration and uncertainty analysis: it could be run with the smallest number of simulation runs to achieve good prediction uncertainty bands and model performance.

  3. Pauli effects in uncertainty relations

    CERN Document Server

    Toranzo, I V; Esquivel, R O; Dehesa, J S

    2014-01-01

    In this letter we analyze the effect of the spin dimensionality of a physical system on two mathematical formulations of the uncertainty principle: a generalized Heisenberg uncertainty relation valid for all antisymmetric N-fermion wavefunctions, and the Fisher-information-based uncertainty relation valid for all antisymmetric N-fermion wavefunctions of central potentials. The accuracy of these spin-modified uncertainty relations is examined for all atoms from Hydrogen to Lawrencium in a self-consistent framework.

  4. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
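
    For reference, the logistic model mentioned here has, in its simplest bivariate max-stable form with unit Fréchet margins, the distribution function below; the parameter alpha in (0, 1] interpolates between complete dependence (alpha -> 0) and independence (alpha = 1).

```latex
% Bivariate logistic max-stable model (Gumbel), unit Frechet margins.
\[
  G(z_1, z_2) \;=\; \exp\!\Big\{-\big(z_1^{-1/\alpha} + z_2^{-1/\alpha}\big)^{\alpha}\Big\},
  \qquad z_1, z_2 > 0, \quad \alpha \in (0, 1] .
\]
```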

  5. On approximate equivalence of the generalized least squares estimate and the maximum likelihood estimate in a growth curve model

    Institute of Scientific and Technical Information of China (English)

    王理同

    2012-01-01

    In a growth curve model, the generalized least squares estimator of the parameter matrix is a linear function of the response variables, while the maximum likelihood estimator is a nonlinear function of them, so statistical inference based on the maximum likelihood estimator is more complicated. In order to simplify that inference, some authors have studied conditions under which the maximum likelihood estimator is exactly equivalent to the generalized least squares estimator. Unfortunately, such conditions are rarely satisfied. An approximate equivalence is therefore considered instead, based on the ratio of the two estimators' norms under the Euclidean norm. If this ratio lies within a prescribed tolerance, the maximum likelihood estimator is regarded as approximately equivalent to the generalized least squares estimator, which simplifies the statistical inference for the maximum likelihood estimator.

  6. Uncertainty in hydrological signatures

    Science.gov (United States)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information, derived as index values from observed data, is known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, the flow duration curve, and the runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures; a simplified sketch of such a calculation is given below. The study was made for two data-rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
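
    The sketch below illustrates the Monte Carlo idea for a single signature (the runoff ratio) with simple multiplicative data errors; the error magnitudes and the signature choice are placeholders, whereas the study itself uses detailed rating-curve and gauge-network error models.

```python
import numpy as np

def signature_uncertainty(flow, rain, n_mc=2000, flow_err=0.10, rain_err=0.08, seed=1):
    """
    Monte Carlo propagation of data uncertainty into a hydrological signature.
    The signature here is the runoff ratio (total flow / total rainfall); data
    uncertainties are represented by multiplicative Gaussian errors.
    Returns the 5th, 50th and 95th percentiles of the signature.
    """
    rng = np.random.default_rng(seed)
    flow = np.asarray(flow, float)
    rain = np.asarray(rain, float)
    sig = np.empty(n_mc)
    for i in range(n_mc):
        f = flow * rng.normal(1.0, flow_err, size=flow.shape)   # perturbed flow series
        r = rain * rng.normal(1.0, rain_err, size=rain.shape)   # perturbed rainfall series
        sig[i] = f.sum() / r.sum()                              # runoff-ratio realisation
    return np.percentile(sig, [5, 50, 95])

# Toy usage with synthetic daily data (mm/day)
rng = np.random.default_rng(0)
rain = rng.gamma(0.6, 8.0, size=365)
flow = 0.45 * rain + rng.normal(0, 0.3, size=365).clip(min=0)
print(signature_uncertainty(flow, rain))
```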

  7. Taming outliers in pulsar-timing datasets with hierarchical likelihoods and Hamiltonian sampling

    CERN Document Server

    Vallisneri, Michele

    2016-01-01

    Pulsar-timing datasets have been analyzed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e., outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a "no-U-...

  8. Section 9: Ground Water - Likelihood of Release

    Science.gov (United States)

    HRS training. The ground water pathway likelihood of release factor category reflects the likelihood that there has been, or will be, a release of hazardous substances in any of the aquifers underlying the site.

  9. Maximum likelihood estimation for life distributions with competing failure modes

    Science.gov (United States)

    Sidik, S. M.

    1979-01-01

    The general model for the competing failure modes assuming that location parameters for each mode are expressible as linear functions of the stress variables and the failure modes act independently is presented. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte-Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
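
    Under the stated independence assumption, the likelihood being maximized has the schematic form below, where observation i fails at time t_i from mode j_i, and f_k and S_k denote the density and survivor function of failure mode k (with the stress variables entering through the location parameters).

```latex
% Likelihood for independent competing failure modes (schematic form).
\[
  L(\theta) \;=\; \prod_{i=1}^{n} \Big[ f_{j_i}(t_i;\theta)
      \prod_{k \neq j_i} S_k(t_i;\theta) \Big].
\]
```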

  10. Workshop on Likelihoods for the LHC Searches

    CERN Document Server

    2013-01-01

    The primary goal of this 3‐day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real‐world examples of the use of likelihoods, including a one‐day in‐depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.

  11. Using LiDAR and QuickBird data to model plant production and quantify uncertainties associated with wetland detection and land cover generalizations

    Science.gov (United States)

    Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and QuickBird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and

  12. Parametric likelihood inference for interval censored competing risks data.

    Science.gov (United States)

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  13. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  14. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; 
Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 <= l <= 2500. For l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  15. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  16. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four different models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data for the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix and model ranks are used to combine the models; a sketch of predictive-likelihood weighting is given below. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
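
    The sketch below shows one way to turn predictive likelihoods into combination weights for a pool of Gaussian density forecasts; the Gaussian form, the hold-out window and all numbers are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from scipy.stats import norm

def predictive_likelihood_weights(forecast_means, forecast_sds, outcomes):
    """
    Compute combination weights from the (log) predictive likelihood of each
    model over a hold-out window, for a linear pool of density forecasts.
    forecast_means, forecast_sds : arrays (n_models, n_periods)
    outcomes                     : array (n_periods,) of realised values
    """
    logpl = norm.logpdf(outcomes, loc=forecast_means, scale=forecast_sds).sum(axis=1)
    logpl -= logpl.max()                 # guard against underflow
    w = np.exp(logpl)
    return w / w.sum()

# Toy usage: three competing GDP-growth forecasting models over 8 quarters
means = np.array([[2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0],
                  [1.5, 1.6, 1.4, 1.7, 1.5, 1.3, 1.6, 1.5],
                  [2.5, 2.6, 2.4, 2.7, 2.5, 2.3, 2.6, 2.5]])
sds = np.full_like(means, 0.5)
actual = np.array([2.1, 2.0, 1.8, 2.3, 1.9, 1.7, 2.2, 2.1])
w = predictive_likelihood_weights(means, sds, actual)
print(w)                                  # weights favour the best-calibrated model
print(w @ means[:, -1])                   # pooled point forecast for the last quarter
```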

  17. Exclusion probabilities and likelihood ratios with applications to kinship problems.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2014-05-01

    In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
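
    Reading the "weighted average of inverse likelihood ratios" as an expectation under the hypothesis H_p that the tested person is a true contributor (an interpretation assumed here for illustration), the stated bound follows from Jensen's inequality:

```latex
% Schematic version of the stated RMNE/LR relationship.
\[
  \mathrm{RMNE} \;=\; \mathrm{E}_{H_p}\!\big[\mathrm{LR}^{-1}\big]
  \quad\Longrightarrow\quad
  \mathrm{E}_{H_p}\!\big[\mathrm{LR}\big] \;\ge\;
  \frac{1}{\mathrm{E}_{H_p}\!\big[\mathrm{LR}^{-1}\big]} \;=\; \frac{1}{\mathrm{RMNE}} .
\]
```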

  18. Insurance Applications of Active Fault Maps Showing Epistemic Uncertainty

    Science.gov (United States)

    Woo, G.

    2005-12-01

    Insurance loss modeling for earthquakes utilizes available maps of active faulting produced by geoscientists. All such maps are subject to uncertainty, arising from lack of knowledge of fault geometry and rupture history. Field work to undertake geological fault investigations drains human and monetary resources, and this inevitably limits the resolution of fault parameters. Some areas are more accessible than others; some may be of greater social or economic importance than others; some areas may be investigated more rapidly or diligently than others; or funding restrictions may have curtailed the extent of the fault mapping program. In contrast with the aleatory uncertainty associated with the inherent variability in the dynamics of earthquake fault rupture, uncertainty associated with lack of knowledge of fault geometry and rupture history is epistemic. The extent of this epistemic uncertainty may vary substantially from one regional or national fault map to another. However aware the local cartographer may be, this uncertainty is generally not conveyed in detail to the international map user. For example, an area may be left blank for a variety of reasons, ranging from lack of sufficient investigation of a fault to lack of convincing evidence of activity. Epistemic uncertainty in fault parameters is of concern in any probabilistic assessment of seismic hazard, not least in insurance earthquake risk applications. A logic-tree framework is appropriate for incorporating epistemic uncertainty. Some insurance contracts cover specific high-value properties or transport infrastructure, and therefore are extremely sensitive to the geometry of active faulting. Alternative Risk Transfer (ART) to the capital markets may also be considered. In order for such insurance or ART contracts to be properly priced, uncertainty should be taken into account. Accordingly, an estimate is needed for the likelihood of surface rupture capable of causing severe damage. Especially where a

  19. Taming outliers in pulsar-timing datasets with hierarchical likelihoods and Hamiltonian sampling

    Science.gov (United States)

    Vallisneri, Michele; van Haasteren, Rutger

    2017-01-01

    Pulsar-timing datasets have been analyzed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e., outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a "no-U-turn" Hamiltonian sampler coupled to a highly customized reparametrization of the likelihood; this code is described elsewhere, but it is already available online. We recommend our method as a standard step in the preparation of pulsar-timing-array datasets: even if statistical inference is not affected, follow-up studies of outlier candidates can reveal unseen problems in radio observations and timing measurements; furthermore, confidence in the results of gravitational-wave searches will only benefit from stringent statistical evidence that datasets are clean and outlier-free.

  20. Taming outliers in pulsar-timing data sets with hierarchical likelihoods and Hamiltonian sampling

    Science.gov (United States)

    Vallisneri, Michele; van Haasteren, Rutger

    2017-04-01

    Pulsar-timing data sets have been analysed with great success using probabilistic treatments based on Gaussian distributions, with applications ranging from studies of neutron-star structure to tests of general relativity and searches for nanosecond gravitational waves. As for other applications of Gaussian distributions, outliers in timing measurements pose a significant challenge to statistical inference, since they can bias the estimation of timing and noise parameters, and affect reported parameter uncertainties. We describe and demonstrate a practical end-to-end approach to perform Bayesian inference of timing and noise parameters robustly in the presence of outliers, and to identify these probabilistically. The method is fully consistent (i.e. outlier-ness probabilities vary in tune with the posterior distributions of the timing and noise parameters), and it relies on the efficient sampling of the hierarchical form of the pulsar-timing likelihood. Such sampling has recently become possible with a 'no-U-turn' Hamiltonian sampler coupled to a highly customized reparametrization of the likelihood; this code is described elsewhere, but it is already available online. We recommend our method as a standard step in the preparation of pulsar-timing-array data sets: even if statistical inference is not affected, follow-up studies of outlier candidates can reveal unseen problems in radio observations and timing measurements; furthermore, confidence in the results of gravitational-wave searches will only benefit from stringent statistical evidence that data sets are clean and outlier-free.

  1. Model Uncertainty for Bilinear Hysteretic Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    In structural reliability analysis at least three types of uncertainty must be considered, namely physical uncertainty, statistical uncertainty, and model uncertainty (see e.g. Thoft-Christensen & Baker [1]). The physical uncertainty is usually modelled by a number of basic variables by predictive...... density functions, Veneziano [2]. In general, model uncertainty is the uncertainty connected with mathematical modelling of the physical reality. When structural reliability analysis is related to the concept of a failure surface (or limit state surface) in the n-dimension basic variable space then model...... uncertainty is at least due to the neglected variables, the modelling of the failure surface and the computational technique used....

  2. Evaluating the use of verbal probability expressions to communicate likelihood information in IPCC reports

    Science.gov (United States)

    Harris, Adam

    2014-05-01

    The Intergovernmental Panel on Climate Change (IPCC) prescribes that the communication of risk and uncertainty information pertaining to scientific reports, model predictions etc. be communicated with a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.

  3. Heteroscedastic one-factor models and marginal maximum likelihood estimation

    NARCIS (Netherlands)

    Hessen, D.J.; Dolan, C.V.

    2009-01-01

    In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimati

  4. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    Science.gov (United States)

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  5. Maximum-likelihood analysis of the COBE angular correlation function

    Science.gov (United States)

    Seljak, Uros; Bertschinger, Edmund

    1993-01-01

    We have used maximum-likelihood estimation to determine the quadrupole amplitude Q_rms-PS and the spectral index n of the density fluctuation power spectrum at recombination from the COBE DMR data. We find a strong correlation between the two parameters of the form Q_rms-PS = (15.7 +/- 2.6) exp[0.46(1 - n)] microK for fixed n. Our result is slightly smaller than and has a smaller statistical uncertainty than the 1992 estimate of Smoot et al.

  6. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    CERN Document Server

    Chiappo, A; Conrad, J; Strigari, L E; Anderson, B; Sanchez-Conde, M A

    2016-01-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter annihilation signals. The J-factors of dark matter-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kine...

  7. Dwarf spheroidal J-factors without priors: A likelihood-based analysis for indirect dark matter searches

    Science.gov (United States)

    Chiappo, A.; Cohen-Tanugi, J.; Conrad, J.; Strigari, L. E.; Anderson, B.; Sánchez-Conde, M. A.

    2017-04-01

    Line-of-sight integrals of the squared density, commonly called the J-factor, are essential for inferring dark matter (DM) annihilation signals. The J-factors of DM-dominated dwarf spheroidal satellite galaxies (dSphs) have typically been derived using Bayesian techniques, which for small data samples implies that a choice of priors constitutes a non-negligible systematic uncertainty. Here we report the development of a new fully frequentist approach to construct the profile likelihood of the J-factor. Using stellar kinematic data from several classical and ultra-faint dSphs, we derive the maximum likelihood value for the J-factor and its confidence intervals. We validate this method, in particular its bias and coverage, using simulated data from the Gaia Challenge. We find that the method possesses good statistical properties. The J-factors and their uncertainties are generally in good agreement with the Bayesian-derived values, with the largest deviations restricted to the systems with the smallest kinematic data sets. We discuss improvements, extensions, and future applications of this technique.

  8. Assessment of conceptual model uncertainty for the regional aquifer Pampa del Tamarugal – North Chile

    Directory of Open Access Journals (Sweden)

    R. Rojas

    2009-09-01

    Full Text Available In this work we assess the uncertainty in modelling the groundwater flow for the Pampa del Tamarugal Aquifer (PTA) – North Chile – using a novel and fully integrated multi-model approach aimed at explicitly accounting for uncertainties arising from the definition of alternative conceptual models. The approach integrates the Generalized Likelihood Uncertainty Estimation (GLUE) and Bayesian Model Averaging (BMA) methods. For each member of an ensemble M of potential conceptualizations, model weights used in BMA for multi-model aggregation are obtained from GLUE-based likelihood values. These model weights are based on model performance, thus reflecting how well a conceptualization reproduces an observed dataset D. GLUE-based cumulative predictive distributions for each member of M are then aggregated, obtaining predictive distributions accounting for conceptual model uncertainties. For the PTA we propose an ensemble of eight alternative conceptualizations covering all major features of groundwater flow models independently developed in past studies and including two recharge mechanisms which have been a source of debate for several years. Results showed that accounting for heterogeneities in the hydraulic conductivity field (a) reduced the uncertainty in the estimations of parameters and state variables, and (b) increased the corresponding model weights used for multi-model aggregation. This was more noticeable when the hydraulic conductivity field was conditioned on available hydraulic conductivity measurements. The contribution of conceptual model uncertainty to the predictive uncertainty varied between 6% and 64% for groundwater head estimations and between 16% and 79% for groundwater flow estimations. These results clearly illustrate the relevance of conceptual model uncertainty.
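
    A simplified sketch of the GLUE/BMA combination described here: each conceptual model contributes a likelihood-weighted empirical predictive CDF from its behavioural parameter sets, and the ensemble CDF is their weighted average. The weighting rule (total GLUE likelihood mass per model) and all names are illustrative assumptions.

```python
import numpy as np

def glue_bma_aggregate(likelihoods, predictions, grid):
    """
    Aggregate GLUE-based predictive distributions from several conceptual models
    using likelihood-based weights (a simplified GLUE/BMA combination).
    likelihoods : list of 1-D arrays of GLUE likelihood values (behavioural sets)
    predictions : list of 1-D arrays of corresponding model predictions
    grid        : 1-D array of values at which the combined CDF is evaluated
    Returns (model_weights, combined_cdf).
    """
    totals = np.array([np.sum(l) for l in likelihoods])
    model_w = totals / totals.sum()                     # weight per conceptualisation

    combined = np.zeros_like(grid, dtype=float)
    for w, l, p in zip(model_w, likelihoods, predictions):
        l = np.asarray(l, float) / np.sum(l)            # within-model GLUE weights
        cdf = np.array([np.sum(l[p <= g]) for g in grid])   # weighted empirical CDF
        combined += w * cdf                             # BMA aggregation
    return model_w, combined

# Toy usage: two alternative conceptual models
rng = np.random.default_rng(2)
lik = [rng.uniform(0.2, 1.0, 200), rng.uniform(0.2, 1.0, 150)]
pred = [rng.normal(10.0, 0.5, 200), rng.normal(10.4, 0.8, 150)]
grid = np.linspace(8, 13, 101)
weights, cdf = glue_bma_aggregate(lik, pred, grid)
print(weights, cdf[50])
```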

  9. Assessment of conceptual model uncertainty for the regional aquifer Pampa del Tamarugal – North Chile

    Directory of Open Access Journals (Sweden)

    R. Rojas

    2010-02-01

    Full Text Available In this work we assess the uncertainty in modelling the groundwater flow for the Pampa del Tamarugal Aquifer (PTA) – North Chile – using a novel and fully integrated multi-model approach aimed at explicitly accounting for uncertainties arising from the definition of alternative conceptual models. The approach integrates the Generalized Likelihood Uncertainty Estimation (GLUE) and Bayesian Model Averaging (BMA) methods. For each member of an ensemble M of potential conceptualizations, model weights used in BMA for multi-model aggregation are obtained from GLUE-based likelihood values. These model weights are based on model performance, thus reflecting how well a conceptualization reproduces an observed dataset D. GLUE-based cumulative predictive distributions for each member of M are then aggregated, obtaining predictive distributions accounting for conceptual model uncertainties. For the PTA we propose an ensemble of eight alternative conceptualizations covering all major features of groundwater flow models independently developed in past studies and including two recharge mechanisms which have been a source of debate for several years. Results showed that accounting for heterogeneities in the hydraulic conductivity field (a) reduced the uncertainty in the estimations of parameters and state variables, and (b) increased the corresponding model weights used for multi-model aggregation. This was more noticeable when the hydraulic conductivity field was conditioned on available hydraulic conductivity measurements. The contribution of conceptual model uncertainty to the predictive uncertainty varied between 6% and 64% for groundwater head estimations and between 16% and 79% for groundwater flow estimations. These results clearly illustrate the relevance of conceptual model uncertainty.

  10. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...

  11. Measurement uncertainty relations

    Energy Technology Data Exchange (ETDEWEB)

    Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)

    2014-04-15

    Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.

  12. Approximated maximum likelihood estimation in multifractal random walks

    CERN Document Server

    Løvsletten, Ola

    2011-01-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  13. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  14. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
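
    A sketch of a Laplace-based likelihood-ratio test for equal scales across groups is given below; the exact statistic used in the paper may differ, and the chi-square reference distribution is the usual large-sample approximation.

```python
import numpy as np
from scipy.stats import chi2

def laplace_lr_heteroscedasticity(groups):
    """
    Likelihood-ratio test for equal scale (no heteroscedasticity) across groups,
    assuming Laplace-distributed observations with group-specific locations.
    Under the Laplace model the MLEs are the group median (location) and the mean
    absolute deviation from the median (scale); the LR statistic is asymptotically
    chi-square with (k - 1) degrees of freedom for k groups.
    """
    groups = [np.asarray(g, float) for g in groups]
    n = np.array([len(g) for g in groups])
    abs_dev = [np.abs(g - np.median(g)) for g in groups]
    b_j = np.array([d.mean() for d in abs_dev])          # group-wise scale MLEs
    b_0 = np.concatenate(abs_dev).mean()                 # common scale under H0
    lr = 2.0 * (n.sum() * np.log(b_0) - np.sum(n * np.log(b_j)))
    return lr, chi2.sf(lr, df=len(groups) - 1)

# Toy usage: three groups, the last one with a larger spread
rng = np.random.default_rng(3)
g1 = rng.laplace(0.0, 1.0, 100)
g2 = rng.laplace(0.5, 1.0, 100)
g3 = rng.laplace(0.0, 2.5, 100)
print(laplace_lr_heteroscedasticity([g1, g2, g3]))
```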

  15. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  16. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...

  17. Influence functions of trimmed likelihood estimators for lifetime experiments

    OpenAIRE

    2015-01-01

    We provide a general approach for deriving the influence function for trimmed likelihood estimators using the implicit function theorem. The approach is applied to lifetime models with exponential or lognormal distributions possessing a linear or nonlinear link function. A side result is that the functional form of the trimmed estimator for location and linear regression used by Bednarski and Clarke (1993, 2002) and Bednarski et al. (2010) is not generally always the correct fu...

  18. Vehicle Routing under Uncertainty

    NARCIS (Netherlands)

    Máhr, T.

    2011-01-01

    In this thesis, the main focus is on the study of a real-world transportation problem with uncertainties, and on the comparison of a centralized and a distributed solution approach in the context of this problem. We formalize the real-world problem, and provide a general framework to extend it with

  19. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with different model structures of varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets; a simplified sketch of this CV calculation is given below. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the more simplistic representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
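
    A simplified sketch of the CV calculation is given below: behavioural parameter sets are selected with a Nash-Sutcliffe threshold (an assumed likelihood measure, not necessarily the one used in the paper) and the characteristic CV is the time-averaged coefficient of variation of their simulations.

```python
import numpy as np

def characteristic_cv(observed, simulations, nse_threshold=0.5):
    """
    Keep 'behavioural' parameter sets whose Nash-Sutcliffe efficiency exceeds a
    threshold, then summarise predictive uncertainty as the mean coefficient of
    variation of the behavioural simulations over time (a 'characteristic CV').
    observed    : array (n_times,)
    simulations : array (n_param_sets, n_times), one Monte Carlo run per row
    """
    observed = np.asarray(observed, float)
    simulations = np.asarray(simulations, float)
    sse = np.sum((simulations - observed) ** 2, axis=1)
    nse = 1.0 - sse / np.sum((observed - observed.mean()) ** 2)
    behavioural = simulations[nse > nse_threshold]
    if behavioural.shape[0] < 2:
        raise ValueError("too few behavioural parameter sets; relax the threshold")
    cv_t = behavioural.std(axis=0, ddof=1) / behavioural.mean(axis=0)  # CV per time step
    return cv_t.mean()

# Toy usage: synthetic 'model runs' scattered around an observed series
rng = np.random.default_rng(4)
obs = 5 + 2 * np.sin(np.linspace(0, 6, 120))
sims = obs + rng.normal(0, 0.8, size=(500, obs.size)) + rng.normal(0, 0.5, size=(500, 1))
print(characteristic_cv(obs, sims))
```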

  20. Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models

    OpenAIRE

    Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine

    2003-01-01

    This paper develops a new contrast process for parametric inference of general hidden Markov models, when the hidden chain has a non-compact state space. This contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimat...

  1. Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2010-01-01

    The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood......: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrary ill-conditioned correlation matrices. The implication of this theorem on Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training...

  2. Application Of Global Sensitivity Analysis And Uncertainty Quantification In Dynamic Modelling Of Micropollutants In Stormwater Runoff

    DEFF Research Database (Denmark)

    Vezzaro, Luca; Mikkelsen, Peter Steen

    2012-01-01

    The need for estimating micropollutant fluxes in stormwater systems increases the role of stormwater quality models as support for urban water managers, although the application of such models is affected by high uncertainty. This study presents a procedure for identifying the major sources...... of uncertainty in a conceptual lumped dynamic stormwater runoff quality model that is used in a study catchment to estimate (i) copper loads, (ii) compliance with dissolved Cu concentration limits on stormwater discharge and (iii) the fraction of Cu loads potentially intercepted by a planned treatment facility...... The analysis is based on the combination of variance-decomposition Global Sensitivity Analysis (GSA) with the Generalized Likelihood Uncertainty Estimation (GLUE) technique. The GSA-GLUE approach highlights the correlation between the model factors defining the mass of pollutant in the system......

  3. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  4. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
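
    As a reminder of the simplest Type B evaluation discussed in this context, an input quantity known only to lie between limits a_- and a_+ is commonly assigned a rectangular distribution, giving the estimate and standard uncertainty

```latex
% Classic GUM Type B example: rectangular distribution over [a_-, a_+].
\[
  x \;=\; \frac{a_- + a_+}{2}, \qquad
  u(x) \;=\; \frac{a_+ - a_-}{2\sqrt{3}} .
\]
```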

  5. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock…

  6. Precise Estimation of Cosmological Parameters Using a More Accurate Likelihood Function

    Science.gov (United States)

    Sato, Masanori; Ichiki, Kiyotomo; Takeuchi, Tsutomu T.

    2010-12-01

    The estimation of cosmological parameters from a given data set requires the construction of a likelihood function which, in general, has a complicated functional form. We adopt a Gaussian copula and construct a copula likelihood function for the convergence power spectrum from a weak lensing survey. We show that parameter estimation based on the Gaussian likelihood erroneously introduces a systematic shift in the confidence region, in particular for the parameter of the dark energy equation of state w. Thus, the copula likelihood should be used in future cosmological observations.
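
    As a rough illustration of the construction (not the authors' convergence-power-spectrum likelihood), a Gaussian copula couples arbitrary marginal densities through a correlation matrix, and the log-likelihood is the log copula density of the probability-transformed data plus the marginal log-densities. The marginals and correlation matrix below are placeholders chosen only to make the sketch runnable.

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglike(x, marginals, R):
    """Log-likelihood of one observation vector x under a Gaussian copula.

    marginals : list of frozen scipy.stats distributions (one per component)
    R         : correlation matrix of the copula
    """
    u = np.array([m.cdf(xi) for m, xi in zip(marginals, x)])       # CDF transform
    z = stats.norm.ppf(u)                                          # Gaussian scores
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    log_copula = -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(z))) @ z
    log_marg = sum(m.logpdf(xi) for m, xi in zip(marginals, x))
    return log_copula + log_marg

# Placeholder marginals and correlation, purely for demonstration.
marginals = [stats.gamma(a=2.0, scale=1.5), stats.norm(loc=0.0, scale=2.0)]
R = np.array([[1.0, 0.6], [0.6, 1.0]])
print(gaussian_copula_loglike(np.array([2.3, -0.5]), marginals, R))
```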

  7. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  8. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
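
    The full DILI construction is beyond a short example, but the baseline idea of a proposal that is well defined on function space can be sketched with the preconditioned Crank-Nicolson (pCN) random walk, which likelihood-informed samplers of this kind refine with Hessian information. The Gaussian prior, toy log-likelihood and tuning values below are stand-ins, not the paper's inverse problems.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 200                                           # dimension of the discretized function

# Stand-in Gaussian prior N(0, C) with a decaying diagonal covariance, and a
# toy log-likelihood that only informs the first three components.
prior_std = 1.0 / np.sqrt(1.0 + np.arange(d))
data = np.array([1.2, -0.7, 0.4])

def loglike(u):
    return -0.5 * np.sum((u[:3] - data) ** 2) / 0.1 ** 2

beta, n_steps = 0.2, 20_000
u = prior_std * rng.standard_normal(d)
acc, mean_u0 = 0, 0.0
for _ in range(n_steps):
    xi = prior_std * rng.standard_normal(d)             # fresh draw from the prior
    v = np.sqrt(1.0 - beta ** 2) * u + beta * xi        # pCN proposal (prior-reversible)
    if np.log(rng.random()) < loglike(v) - loglike(u):  # accept on the likelihood ratio only
        u, acc = v, acc + 1
    mean_u0 += u[0]
print("acceptance rate:", acc / n_steps, "posterior mean of u[0]:", mean_u0 / n_steps)
```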

  9. Maximum likelihood estimates of pairwise rearrangement distances.

    Science.gov (United States)

    Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R

    2017-06-21

    Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into 3 categories: Empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for minimal distance and MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked, and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Uncertainty relation in Schwarzschild spacetime

    Science.gov (United States)

    Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng

    2015-04-01

    We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer, which could therefore be witnessed experimentally by a proper uncertainty game. We first investigate an uncertainty game between a free-falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that the quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that the Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log₂ c. A numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.

  11. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2012-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The po

  12. Sampling variability in forensic likelihood-ratio computation: A simulation study

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Meuwly, Didier

    2015-01-01

    Recently, in the forensic biometric community, there is a growing interest to compute a metric called “likelihood-ratio” when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio

  13. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    textabstractWe propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these

  14. Network planning under uncertainties

    Science.gov (United States)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, the varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by fluctuations in topology to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a

  15. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to sequentially export in order to slowly learn more about its chances for success in untested markets....

  16. Relationships between General Self-efficacy, Intolerance of Uncertainty and Test Anxiety%高中生一般自我效能感、无法忍受不确定性与考试焦虑的关系

    Institute of Scientific and Technical Information of China (English)

    张荣娟

    2016-01-01

    Objective: To explore the relationship between general self-efficacy and test anxiety and the mediating role of intolerance of uncertainty between them. Methods: 400 senior middle school students were assessed with the General Self-efficacy Scale, the brief version of the Intolerance of Uncertainty Scale and the Test Anxiety Scale, and the correlations were then analyzed. Results: General self-efficacy was negatively correlated with test anxiety, intolerance of uncertainty and its factors, while test anxiety was positively correlated with intolerance of uncertainty and its factors, and all correlation coefficients were significant. Linear regression analysis revealed that intolerance of uncertainty and its factors fully mediated the effect of general self-efficacy on test anxiety, and the IUS-12 prospective anxiety factor accounted for much more variance in test anxiety than the IUS-12 inhibitory anxiety factor. Conclusion: General self-efficacy can predict test anxiety directly, or indirectly through the mediation of intolerance of uncertainty.

  17. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, which is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  18. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  19. On an Extended Quasi-likelihood Estimation and a Diagnostic Test for Heteroscedasticity in the Generalized linear Models%广义线性模型中一类推广的拟似然估计与异方差性诊断检验

    Institute of Scientific and Technical Information of China (English)

    田茂再; 吴喜之

    2002-01-01

    A general heteroscedastic regression model is considered in a random design setting. In this model, the relationship between the regression function and the variance function is assumed to follow an extended generalized nonlinear model, which is common in practice and has the generalized linear models as special cases. A locally weighted quasi-likelihood estimate for the mean function is derived and then applied to obtain an estimate of the variance function, and it is demonstrated that these estimators have good properties. A test for heteroscedasticity is established. The methodology employed in this article is attractive.

  20. Uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
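
    For reference, the Latin hypercube scheme evaluated here stratifies each input into equal-probability bins, places one sample per bin, and pairs the strata across inputs at random. A minimal sketch, with an invented three-input model and a crude rank-correlation sensitivity summary of the kind the report mentions, is given below.

```python
import numpy as np

def latin_hypercube(n_samples, n_inputs, rng):
    """One sample per equal-probability stratum in each input, strata paired at random."""
    u = (rng.random((n_samples, n_inputs)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_inputs):
        u[:, j] = u[rng.permutation(n_samples), j]      # shuffle the strata per input
    return u                                            # points uniform on [0, 1)^d

rng = np.random.default_rng(3)
U = latin_hypercube(100, 3, rng)

# Invented three-input model, plus a crude rank-correlation sensitivity summary.
X = np.column_stack([2.0 + U[:, 0], 0.5 * U[:, 1], 10.0 * U[:, 2]])
y = X[:, 0] * X[:, 2] + X[:, 1]

def ranks(a):
    return np.argsort(np.argsort(a))

for j in range(3):
    print("input", j, "rank correlation with output:",
          round(float(np.corrcoef(ranks(X[:, j]), ranks(y))[0, 1]), 3))
```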

  1. Generalized Additive Models, Cubic Splines and Penalized Likelihood.

    Science.gov (United States)

    1987-05-22

    Only a fragment of the abstract is recoverable: all models in the table include a dummy variable to account for the matching in case-control studies. Cited reference: Breslow, N. and Day, N. (1980), Statistical Methods in Cancer Research, Volume 1: The Analysis of Case-Control Studies, International Agency for Research on Cancer.

  2. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O.   Asemota

    2009-01-01

    Full Text Available Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some element of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables adaptive control of loan demand as well as of fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest it as loans to earn more assets without jeopardizing public confidence.
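
    At its core, an adaptive estimator with a forgetting factor discounts old observations so the fitted model can track a drifting system. The sketch below is a generic recursive least-squares update with a constant forgetting factor applied to simulated data; it only illustrates the adaptive mechanism and is not the authors' approximate maximum likelihood bank model.

```python
import numpy as np

rng = np.random.default_rng(4)
T, lam = 300, 0.97                          # horizon and (fixed) forgetting factor

# Simulated regression data whose true coefficients drift over time.
theta_true = np.array([1.0, -0.5])
theta_hat = np.zeros(2)
P = 1000.0 * np.eye(2)                      # large initial covariance

for t in range(T):
    theta_true += 0.01 * rng.standard_normal(2)         # slow drift in the true system
    x = np.array([1.0, rng.standard_normal()])           # regressor (intercept + input)
    y = x @ theta_true + 0.1 * rng.standard_normal()     # observed response

    # Recursive least-squares update with exponential forgetting.
    k = P @ x / (lam + x @ P @ x)            # gain
    theta_hat = theta_hat + k * (y - x @ theta_hat)
    P = (P - np.outer(k, x @ P)) / lam       # discount old information

print("tracked:", theta_hat, "true:", theta_true)
```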

  3. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory including Bayesian and frequentist approaches and methodology possible without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians.  After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  4. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  5. Maximum likelihood Jukes-Cantor triplets: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Hendy, Michael D; Snir, Sagi

    2006-03-01

    Maximum likelihood (ML) is a popular method for inferring a phylogenetic tree of the evolutionary relationship of a set of taxa from observed homologous aligned genetic sequences of the taxa. Generally, the computation of the ML tree is based on numerical methods, which in a few cases are known to converge to a local maximum on a tree, which is suboptimal. The extent of this problem is unknown; one approach is to attempt to derive algebraic equations for the likelihood equation and find the maximum points analytically. This approach has so far only been successful in the very simplest cases, of three or four taxa under the Neyman model of evolution of two-state characters. In this paper we extend this approach, for the first time, to four-state characters, the Jukes-Cantor model under a molecular clock, on a tree T on three taxa, a rooted triple. We employ spectral methods (Hadamard conjugation) to express the likelihood function parameterized by the path-length spectrum. Taking partial derivatives, we derive a set of polynomial equations whose simultaneous solution contains all critical points of the likelihood function. Using tools of algebraic geometry (the resultant of two polynomials) in the computer algebra package Maple, we are able to find all turning points analytically. We then employ this method on real sequence data and obtain realistic results on the primate-rodent divergence time.

  6. Maximum Likelihood Under Response Biased Sampling

    OpenAIRE

    Chambers, Raymond; Dorfman, Alan; Wang, Suojin

    2003-01-01

    Informative sampling occurs when the probability of inclusion in sample depends on the value of the survey response variable. Response or size biased sampling is a particular case of informative sampling where the inclusion probability is proportional to the value of this variable. In this paper we describe a general model for response biased sampling, which we call array sampling, and develop maximum likelihood and estimating equation theory appropriate to this situation. The ...

  7. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  8. Effect of Baseflow Separation on Uncertainty of Hydrological Modeling in the Xinanjiang Model

    Directory of Open Access Journals (Sweden)

    Kairong Lin

    2014-01-01

    Full Text Available Based on the idea of inputting more available useful information for evaluation to gain less uncertainty, this study focuses on how well the uncertainty can be reduced by considering the baseflow estimation information obtained from the smoothed minima method (SMM). The Xinanjiang model and the generalized likelihood uncertainty estimation (GLUE) method with the shuffled complex evolution Metropolis (SCEM-UA) sampling algorithm were used for hydrological modeling and uncertainty analysis, respectively. The Jiangkou basin, located in the upper reaches of the Hanjiang River, was selected as the case study. It was found that the number and standard deviation of behavioral parameter sets both decreased when the threshold value for the baseflow efficiency index increased, and that high Nash-Sutcliffe efficiency coefficients correspond well with high baseflow efficiency coefficients. The results also showed that the uncertainty interval width decreased significantly, while the containing ratio did not decrease by much, and that the simulated runoff obtained with the behavioral parameter sets fits the observed runoff better when a threshold for the baseflow efficiency index is taken into consideration. This implies that using the baseflow estimation information can reduce the uncertainty in hydrological modeling to some degree and yield more reasonable prediction bounds.
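
    Stripped of the specific model and sampler, the GLUE workflow used here reduces to: sample parameter sets, run the model, keep the behavioral sets whose efficiency exceeds the chosen threshold, and form prediction bounds from the behavioral simulations. The sketch below uses a trivial stand-in for the rainfall-runoff model and unweighted percentiles for the bounds (GLUE proper weights the quantiles by the likelihood measure).

```python
import numpy as np

rng = np.random.default_rng(5)

def toy_model(rain, k, c):
    """Stand-in rainfall-runoff model: exponentially smoothed, scaled rainfall."""
    q, s = np.zeros_like(rain), 0.0
    for t, r in enumerate(rain):
        s = k * s + (1.0 - k) * r
        q[t] = c * s
    return q

rain = rng.gamma(2.0, 2.0, 200)
q_obs = toy_model(rain, 0.8, 0.6) + 0.3 * rng.standard_normal(200)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# GLUE: Monte Carlo sampling, behavioral threshold, prediction bounds.
n_sets, threshold = 5000, 0.7
k_samp = rng.uniform(0.5, 0.99, n_sets)
c_samp = rng.uniform(0.1, 1.0, n_sets)
sims = np.array([toy_model(rain, k, c) for k, c in zip(k_samp, c_samp)])
scores = np.array([nse(s, q_obs) for s in sims])

behavioral = scores > threshold
sims_b = sims[behavioral]
lower = np.percentile(sims_b, 5, axis=0)     # simple 90 % bounds (unweighted here;
upper = np.percentile(sims_b, 95, axis=0)    # GLUE proper uses likelihood weights)
containing_ratio = np.mean((q_obs >= lower) & (q_obs <= upper))
print(behavioral.sum(), "behavioral sets;",
      "containing ratio:", containing_ratio,
      "mean bound width:", np.mean(upper - lower))
```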

  9. Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture

    Science.gov (United States)

    2016-06-07

    The goals of this work are (1) to develop methods for modeling and computing the distribution of the uncertainty in Signal Excess (SE) prediction for multistatic active detection of submarines resulting from uncertainty in environmental predictions, and (2) to develop methods for accounting for this uncertainty in a Bayesian track-before-detect system, the Likelihood Ratio Tracker (LRT).

  10. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-11-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from two different parameter uncertainty estimation methods: the Generalized Likelihood Uncertainty Estimation (GLUE) and the Shuffled Complex Evolution Metropolis (SCEM). Model ensembles are generated for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the presented probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of model performance associated with parameter uncertainty. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
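
    Two of the simpler metrics in these categories can be written down directly: a rank histogram for distributional calibration, and a containing ratio with mean interval width for the uncertainty bounds. The ensemble and observations in the sketch below are synthetic, not SAC-SMA output.

```python
import numpy as np

rng = np.random.default_rng(6)
n_times, n_members = 500, 25

# Synthetic "truth" and an ensemble that is slightly under-dispersive.
truth = rng.normal(0.0, 1.0, n_times)
ensemble = truth[:, None] + rng.normal(0.0, 0.8, (n_times, n_members))

# Rank histogram: position of the observation within the sorted ensemble.
ranks = np.sum(ensemble < truth[:, None], axis=1)
hist = np.bincount(ranks, minlength=n_members + 1) / n_times

# Interval-based summary: 90 % bounds, containing ratio and average width.
lower = np.percentile(ensemble, 5, axis=1)
upper = np.percentile(ensemble, 95, axis=1)
containing_ratio = np.mean((truth >= lower) & (truth <= upper))
print("rank histogram (flat if calibrated):", np.round(hist, 3))
print("containing ratio:", containing_ratio, "mean width:", np.mean(upper - lower))
```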

  11. Integration based profile likelihood calculation for PDE constrained parameter estimation problems

    Science.gov (United States)

    Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.

    2016-12-01

    Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration-based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration-based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe good accuracy of the method as well as a significant speed-up compared to established methods. Integration-based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
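
    The optimization-based baseline that the integration approach is compared against is easy to state: fix the parameter of interest on a grid and re-maximize the likelihood over the remaining parameters at each grid point. The sketch below does this for a small nonlinear regression model with synthetic data, not a PDE-constrained problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-0.7 * x) + 0.05 * rng.standard_normal(x.size)   # synthetic data

def negloglike(theta):
    a, k, log_sigma = theta
    resid = y - a * np.exp(-k * x)
    return 0.5 * np.sum(resid ** 2) / np.exp(2 * log_sigma) + x.size * log_sigma

mle = minimize(negloglike, np.array([1.0, 0.5, -2.0]), method="Nelder-Mead")

# Profile likelihood for the decay rate k: re-optimize a and sigma on a grid of k.
profile = []
for k_fixed in np.linspace(0.5, 0.9, 21):
    fit = minimize(lambda t: negloglike(np.array([t[0], k_fixed, t[1]])),
                   np.array([mle.x[0], mle.x[2]]), method="Nelder-Mead")
    profile.append(fit.fun - mle.fun)        # profile negative log-likelihood offset
print(np.round(profile, 2))                  # offsets below ~1.92 lie in a 95 % CI for k
```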

  12. Coupled semivariogram uncertainty of hydrogeological and geophysical data on capture zone uncertainty analysis

    Science.gov (United States)

    Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.

    2008-01-01

    This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.

  13. Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.

    Science.gov (United States)

    1986-05-01

    Only a fragment of the abstract is recoverable: the consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example Wald (1949) and Wolfowitz (1953, 1965). Cited references include: Wald, A. (1949). Note on the consistency of the maximum likelihood estimate. Ann. Math. Statist. 20, 595-601; Wolfowitz, J. (1953). The method of maximum likelihood and the Wald theory of decision functions. Indag. Math. 15, 114-119.

  14. Statistical analysis of the Lognormal-Pareto distribution using Probability Weighted Moments and Maximum Likelihood

    OpenAIRE

    Marco Bee

    2012-01-01

    This paper deals with the estimation of the lognormal-Pareto and the lognormal-Generalized Pareto mixture distributions. The log-likelihood function is discontinuous, so that Maximum Likelihood Estimation is not asymptotically optimal. For this reason, we develop an alternative method based on Probability Weighted Moments. We show that the standard version of the method can be applied to the first distribution, but not to the latter. Thus, in the lognormal- Generalized Pareto case, we work ou...

  15. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties… and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study…

  16. Feedback Stabilization by Robust Passivity of General Nonlinear Systems with Structural Uncertainty%具有结构不确定性一般非线形系统通过鲁棒无源的反馈镇定

    Institute of Scientific and Technical Information of China (English)

    蔡秀珊; 韩正之; 寇春海

    2005-01-01

    A general nonlinear system with structural uncertainty is dealt with, and necessary conditions for it to be robustly passive are derived. From these necessary conditions, sufficient conditions for zero-state detectability are deduced. Based on passive systems theory and the technique of feedback equivalence, sufficient conditions for the system to be locally (globally) asymptotically stabilized via smooth state feedback are developed. A smooth state feedback control law can be constructed explicitly to locally (globally) stabilize the equilibrium of the closed-loop system. A simulation example shows the effectiveness of the method.

  17. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
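
    The modification amounts to replacing the Gaussian chi^2 with the Poisson negative log-likelihood (the Cash statistic) as the objective for binned counts. A minimal sketch fitting a Gaussian peak on a flat background to simulated counts is shown below; the data are made up and stand in for the Mn K-alpha spectrum discussed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
edges = np.linspace(-5.0, 5.0, 81)
centers = 0.5 * (edges[:-1] + edges[1:])

def model(theta, x):
    amp, mu, sigma, bkg = theta
    return bkg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

true = np.array([40.0, 0.3, 0.8, 5.0])
counts = rng.poisson(model(true, centers))                 # simulated histogram

def poisson_nll(theta):
    mu = model(theta, centers)
    if np.any(mu <= 0):
        return np.inf
    # Cash-style statistic: -2 log L up to a data-only constant.
    return 2.0 * np.sum(mu - counts * np.log(mu))

fit = minimize(poisson_nll, np.array([30.0, 0.0, 1.0, 4.0]), method="Nelder-Mead")
print("true:", true, "fit:", np.round(fit.x, 3))
```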

  18. Gaussian maximum likelihood and contextual classification algorithms for multicrop classification

    Science.gov (United States)

    Di Zenzo, Silvano; Bernstein, Ralph; Kolsky, Harwood G.; Degloria, Stephen D.

    1987-01-01

    The paper reviews some of the ways in which context has been handled in the remote-sensing literature, and additional possibilities are introduced. The problem of computing exhaustive and normalized class-membership probabilities from the likelihoods provided by the Gaussian maximum likelihood classifier (to be used as initial probability estimates to start relaxation) is discussed. An efficient implementation of probabilistic relaxation is proposed, suiting the needs of actual remote-sensing applications. A modified fuzzy-relaxation algorithm using generalized operations between fuzzy sets is presented. Combined use of the two relaxation algorithms is proposed to exploit context in multispectral classification of remotely sensed data. Results on both one artificially created image and one MSS data set are reported.
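
    The step of turning Gaussian maximum likelihood scores into exhaustive, normalized class-membership probabilities (the initial estimates for relaxation) is Bayes' rule over the class-conditional Gaussian densities. The compact sketch below uses invented class statistics and pixels rather than MSS data.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)

# Invented class-conditional Gaussians for three "crop" classes in two bands.
means = [np.array([10.0, 20.0]), np.array([15.0, 18.0]), np.array([12.0, 25.0])]
covs = [np.diag([2.0, 3.0]), np.diag([1.5, 2.5]), np.diag([2.5, 2.0])]
priors = np.array([0.5, 0.3, 0.2])

pixels = rng.normal([11.0, 21.0], 1.0, size=(5, 2))        # a few pixels to classify

# Likelihoods per class, normalized into class-membership probabilities.
like = np.column_stack([multivariate_normal(m, c).pdf(pixels) for m, c in zip(means, covs)])
post = like * priors
post = post / post.sum(axis=1, keepdims=True)              # exhaustive and normalized
print(np.round(post, 3), post.argmax(axis=1))
```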

  19. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  20. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, which is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  1. Likelihood alarm displays. [for human operator

    Science.gov (United States)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.

  2. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  3. CORA: Emission Line Fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, Jan-Uwe; Wichmann, Rainer

    2011-12-01

    CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.

  4. Asymptotic properties of maximum likelihood estimators in models with multiple change points

    CERN Document Server

    He, Heping; 10.3150/09-BEJ232

    2011-01-01

    Models with multiple change points are used in many fields; however, the theoretical properties of maximum likelihood estimators of such models have received relatively little attention. The goal of this paper is to establish the asymptotic properties of maximum likelihood estimators of the parameters of a multiple change-point model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the maximum likelihood estimators of the change points is established and the rate of convergence is determined; the asymptotic distribution of the maximum likelihood estimators of the parameters of the within-segment distributions is also derived. Since the approach used in single change-point models is not easily extended to multiple change-point models, these results require the introduction of those tools for analyzing the likelihood function in a multiple change-point model.

  5. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable

  6. Multi-Site Validation of the SWAT Model on the Bani Catchment: Model Performance and Predictive Uncertainty

    Directory of Open Access Journals (Sweden)

    Jamilatou Chaibou Begou

    2016-04-01

    Full Text Available The objective of this study was to assess the performance and predictive uncertainty of the Soil and Water Assessment Tool (SWAT) model on the Bani River Basin, at catchment and subcatchment levels. The SWAT model was calibrated using the Generalized Likelihood Uncertainty Estimation (GLUE) approach. Potential Evapotranspiration (PET) and biomass were considered in the verification of model output accuracy. Global Sensitivity Analysis (GSA) was used for identifying important model parameters. Results indicated a good performance of the global model at daily as well as monthly time steps, with adequate predictive uncertainty. PET was found to be overestimated, but biomass was better predicted in agricultural land and forest. Surface runoff represents the dominant process of streamflow generation in that region. Individual calibration at the subcatchment scale yielded better performance than when the global parameter sets were applied. These results are very useful and provide support for further studies on regionalization aimed at making predictions in ungauged basins.

  7. Bayesian Assessment of the Uncertainties of Estimates of a Conceptual Rainfall-Runoff Model Parameters

    Science.gov (United States)

    Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.

    2014-12-01

    This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for developing the studies. In this paper, we used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classical normal likelihood, r ~ N(0, σ²); and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach is adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.

  8. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
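
    The averaging step itself is compact once each calibrated model's information criterion (for example KIC or BIC in the maximum likelihood version of BMA) is available: convert criterion differences into posterior model weights, then combine predictions and variances. The numbers below are invented and equal prior model probabilities are assumed; the point is only the weighting arithmetic.

```python
import numpy as np

# Invented per-model results: an information criterion (smaller is better),
# a prediction, and a within-model predictive variance for some quantity.
ic = np.array([210.4, 214.9, 212.1])             # e.g. KIC or BIC values
pred = np.array([3.2, 2.6, 3.0])
var_within = np.array([0.20, 0.35, 0.25])

delta = ic - ic.min()
w = np.exp(-0.5 * delta)                         # equal prior model probabilities assumed
w /= w.sum()                                     # posterior model weights

pred_bma = np.sum(w * pred)                      # model-averaged prediction
# Total variance = averaged within-model variance + between-model variance.
var_bma = np.sum(w * var_within) + np.sum(w * (pred - pred_bma) ** 2)
print(np.round(w, 3), pred_bma, var_bma)
```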

  9. Likelihood Approach to the First Dark Matter Results from XENON100

    CERN Document Server

    Aprile, E; Arneodo, F; Askin, A; Baudis, L; Behrens, A; Bokeloh, K; Brown, E; Bruch, T; Cardoso, J M R; Choi, B; Cline, D; Duchovni, E; Fattori, S; Ferella, A D; Giboni, K -L; Gross, E; Kish, A; Lam, C W; Lamblin, J; Lang, R F; Lim, K E; Lindemann, S; Lindner, M; Lopes, J A M; Undagoitia, T Marrodán; Mei, Y; Fernandez, A J Melgarejo; Ni, K; Oberlack, U; Orrigo, S E A; Pantic, E; Plante, G; Ribeiro, A C C; Santorelli, R; Santos, J M F dos; Schumann, M; Shagin, P; Teymourian, A; Thers, D; Tziaferi, E; Vitells, O; Wang, H; Weber, M; Weinheimer, C

    2011-01-01

    Many experiments that aim at the direct detection of Dark Matter are able to distinguish a dominant background from the expected feeble signals, based on some measured discrimination parameter. We develop a statistical model for such experiments using the Profile Likelihood ratio as a test statistic in a frequentist approach. We take data from calibrations as control measurements for signal and background, and the method allows the inclusion of data from Monte Carlo simulations. Systematic detector uncertainties, such as uncertainties in the energy scale, as well as astrophysical uncertainties, are included in the model. The statistical model can be used to either set an exclusion limit or to make a discovery claim, and the results are derived with a proper treatment of statistical and systematic uncertainties. We apply the model to the first data release of the XENON100 experiment, which allows to extract additional information from the data, and place stronger limits on the spin-independent elastic WIMP-nuc...

  10. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...

  11. Synthesizing Regression Results: A Factored Likelihood Method

    Science.gov (United States)

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  12. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  14. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  15. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  16. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be experienced and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding the model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to wrong uncertainty estimates. The present paper aims to explore the influence of the Box-Cox transformation on uncertainty analysis for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the
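
    The role of the Box-Cox transformation in a formal likelihood can be made concrete: transform the observed and simulated series, evaluate a Gaussian log-likelihood on the transformed residuals, and include the Jacobian of the transformation. The series, lambda and sigma in the sketch below are synthetic and arbitrary, not the Nocella data.

```python
import numpy as np

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def boxcox_gaussian_loglike(obs, sim, lam, sigma):
    """Gaussian log-likelihood on Box-Cox transformed residuals (with Jacobian)."""
    resid = boxcox(obs, lam) - boxcox(sim, lam)
    n = obs.size
    ll = -0.5 * n * np.log(2.0 * np.pi * sigma ** 2) - 0.5 * np.sum(resid ** 2) / sigma ** 2
    jacobian = (lam - 1.0) * np.sum(np.log(obs))     # d z / d y = y**(lam - 1)
    return ll + jacobian

rng = np.random.default_rng(10)
sim = rng.gamma(3.0, 2.0, 100) + 0.1                 # strictly positive series
obs = sim * np.exp(0.1 * rng.standard_normal(100))   # heteroscedastic "observations"
print(boxcox_gaussian_loglike(obs, sim, lam=0.3, sigma=0.2))
```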

  17. Generalized Secant Hyperbolic Distribution and a Method for Estimating its Parameters: Modified Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Másmela Caita

    2013-11-01

    Full Text Available Various generalized distributions have been developed in the statistical literature, among them the Generalized Secant Hyperbolic distribution (SHG). This paper presents an alternative method for estimating the population parameters of the SHG, called Modified Maximum Likelihood (MVM), assuming some alternative expressions that differ from Vaughan's 2002 work and using the same data set as the original source. The MVM method is implemented computationally and yields good approximations to the location and scale parameter values reported by Vaughan in his article. The aim is to give practitioners a different methodology for estimation.

  18. Uncertainty relation in Schwarzschild spacetime

    Directory of Open Access Journals (Sweden)

    Jun Feng

    2015-04-01

    Full Text Available We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer, which could therefore be witnessed experimentally through a properly designed uncertainty game. We first investigate an uncertainty game between a free-falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements in the view of the static observer, which depends on the mass of the black hole, the distance of the observer from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. A numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.

  19. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.

  20. Estimating the uncertainty in underresolved nonlinear dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Chorin, Alexandre; Hald, Ole

    2013-06-12

    The Mori-Zwanzig formalism of statistical mechanics is used to estimate the uncertainty caused by underresolution in the solution of a nonlinear dynamical system. A general approach is outlined and applied to a simple example. The noise term that describes the uncertainty turns out to be neither Markovian nor Gaussian. It is argued that this is the general situation.

  1. Non-scalar uncertainty: Uncertainty in dynamic systems

    Science.gov (United States)

    Martinez, Salvador Gutierrez

    1992-01-01

    The following point is stated throughout the paper: dynamic systems are usually subject to uncertainty, whether the unavoidable quantum uncertainty that arises at sufficiently small scales, uncertainty deliberately allowed by the researcher at large scales in order to simplify the problem, or uncertainty introduced by nonlinear interactions. Even though non-quantum uncertainty can generally be dealt with by using the ordinary probability formalisms, it can also be studied with the proposed non-scalar formalism. Thus, non-scalar uncertainty is a more general theoretical framework giving insight into the nature of uncertainty and providing a practical tool in those cases in which scalar uncertainty is not enough, such as when studying highly nonlinear dynamic systems. This paper's specific contribution is the general concept of non-scalar uncertainty and a first proposal for a methodology; applications should be based upon this methodology. The advantage of this approach is to provide simpler mathematical models for prediction of the system states. Present conventional tools for dealing with uncertainty prove insufficient for an effective description of some dynamic systems. The main limitations are overcome by abandoning ordinary scalar algebra on the real interval (0, 1) in favor of a tensor field with a much richer structure and generality. This approach gives insight into the interpretation of Quantum Mechanics and will have its most profound consequences in the fields of elementary particle physics and nonlinear dynamic systems. Concepts like 'interfering alternatives' and 'discrete states' have an elegant explanation in this framework in terms of properties of dynamic systems such as strange attractors and chaos. The tensor formalism proves especially useful for describing the mechanics of representing dynamic systems with models that are closer to reality and have relatively much simpler solutions. It was found to be wise to get an approximate solution to an

  2. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    Science.gov (United States)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  3. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  4. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
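
    For orientation, the sketch below solves the same kind of l1-penalised maximum likelihood problem on a toy sparse Gaussian graphical model, but with scikit-learn's graphical lasso solver rather than the block coordinate descent or Nesterov-type algorithms developed in the paper; the chain-graph precision matrix and the penalty value are made up for illustration.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)

    # Ground truth: a sparse precision matrix for a 5-node chain graph
    prec = np.eye(5) + np.diag(0.4 * np.ones(4), 1) + np.diag(0.4 * np.ones(4), -1)
    cov = np.linalg.inv(prec)
    X = rng.multivariate_normal(np.zeros(5), cov, size=4000)

    # l1-penalised maximum likelihood estimate of the precision matrix
    est = GraphicalLasso(alpha=0.05).fit(X)

    # Near-zero off-diagonal entries correspond to absent edges in the graph
    print(np.round(est.precision_, 2))
    ```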

  5. Composite likelihood method for inferring local pedigrees

    Science.gov (United States)

    Nielsen, Rasmus

    2017-01-01

    Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can infer distant relatives more accurately than existing methods. Furthermore, we illustrate the utility of the method on a sample of Greenlandic Inuit. PMID:28827797

  6. Effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model output

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    This study analyses the effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model's discharge estimates. Prediction uncertainty bounds are derived using the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation (at a single station within the catchment) and a precipitation factor FPi. Thus, these factors provide a simplified representation of the spatial variation of precipitation, specifically the shape of the functional relationship between precipitation and height. In the absence of information about appropriate values of the precipitation factors FPi, these are estimated through standard calibration procedures. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. Monte Carlo samples of the model output are obtained by randomly varying the model parameters within their feasible ranges. In the first experiment, the precipitation factors FPi are considered unknown and thus included in the sampling process. The total number of unknown parameters in this case is 16. In the second experiment, precipitation factors FPi are estimated a priori, by means of a long term water balance between observed discharge at the catchment outlet, evapotranspiration estimates and observed precipitation. In this case, the number of unknown parameters reduces to 11. The feasible ranges assigned to the precipitation factors in the first experiment are slightly wider than the range of fixed precipitation factors used in the second experiment. The mean squared error of the Box-Cox transformed discharge during the calibration period is used for the evaluation of the
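
    The GLUE workflow referred to here follows a generic pattern that a short sketch can make concrete. The code below is a stand-in skeleton, not the five-zone snowmelt model of the study: a toy two-parameter model, a Nash-Sutcliffe informal likelihood, an arbitrary behavioural threshold, and likelihood-weighted 5-95% prediction bounds.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def toy_model(a, b, forcing):
        """Stand-in for the monthly watershed model: any mapping from
        parameters and forcing to simulated discharge will do here."""
        return a * forcing + b

    # synthetic forcing and 'observed' discharge
    forcing = rng.gamma(2.0, 10.0, size=120)
    observed = toy_model(1.5, 3.0, forcing) + rng.normal(0.0, 5.0, size=120)

    # 1. Monte Carlo sampling of the parameters within their feasible ranges
    n = 20_000
    a = rng.uniform(0.5, 3.0, n)
    b = rng.uniform(-10.0, 10.0, n)

    # 2. Informal likelihood (Nash-Sutcliffe efficiency) and a behavioural cutoff
    sims = toy_model(a[:, None], b[:, None], forcing[None, :])
    nse = 1.0 - np.sum((sims - observed) ** 2, axis=1) / np.sum(
        (observed - observed.mean()) ** 2)
    keep = nse > 0.7
    weights = nse[keep] / nse[keep].sum()

    # 3. Likelihood-weighted 5-95% prediction bounds for each month
    def weighted_quantile(v, w, q):
        order = np.argsort(v)
        cdf = np.cumsum(w[order])
        return np.interp(q, cdf, v[order])

    bounds = np.array([[weighted_quantile(sims[keep, t], weights, q)
                        for q in (0.05, 0.95)] for t in range(forcing.size)])
    print(bounds[:3])       # lower/upper bound for the first three months
    ```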

  7. Optimal design under uncertainty of a passive defense structure against snow avalanches: from a general Bayesian framework to a simple analytical model

    Directory of Open Access Journals (Sweden)

    N. Eckert

    2008-10-01

    Full Text Available For snow avalanches, passive defense structures are generally designed by considering high return period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking the poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.

  8. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  9. Maximum likelihood continuity mapping for fraud detection

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1997-05-01

    The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, which are important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.

  10. Likelihood methods and classical burster repetition

    CERN Document Server

    Graziani, C; Graziani, Carlo; Lamb, Donald Q

    1995-01-01

    We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog, and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number N_r of sources which repeat a fixed number of times N_rep are superposed upon a number N_nr of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present, and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors: the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data, and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...

  11. Database likelihood ratios and familial DNA searching

    CERN Document Server

    Slooten, Klaas

    2012-01-01

    Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...

  12. Uncertainty relations based on skew information with quantum memory

    Science.gov (United States)

    Ma, ZhiHao; Chen, ZhiHua; Fei, Shao-Ming

    2017-01-01

    We present a new uncertainty relation by defining a measure of uncertainty based on skew information. For bipartite systems, we establish uncertainty relations with the existence of a quantum memory. A general relation between quantum correlations and tight bounds of uncertainty has been presented.

  13. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    Science.gov (United States)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in their inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively lower and more uniform compared with other likelihood functions composed of individual calibration criteria. This
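
    Two ingredients of the workflow described above are easy to sketch: Latin hypercube sampling of the parameter space and a combined likelihood across criteria. The parameter names, ranges and the weighted-geometric-mean combination below are placeholders (the paper's exact CL form and 25 RZWQM2 parameters are not reproduced here).

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Latin hypercube sample of three hypothetical parameters
    sampler = qmc.LatinHypercube(d=3, seed=7)
    unit = sampler.random(n=5000)
    params = qmc.scale(unit, l_bounds=[0.05, 0.0, 10.0], u_bounds=[0.6, 5.0, 80.0])

    def nse_likelihood(sim, obs):
        """Informal likelihood for one output variable (NSE, truncated at zero)."""
        nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        return max(nse, 0.0)

    def combined_likelihood(likelihoods, weights=None):
        """One possible combined likelihood across criteria (soil water,
        nitrate, yields): a weighted geometric mean, zero if any criterion fails."""
        L = np.asarray(likelihoods, dtype=float)
        if np.any(L <= 0.0):
            return 0.0
        w = np.full(L.size, 1.0 / L.size) if weights is None else np.asarray(weights)
        return float(np.prod(L ** w))

    # e.g. criteria values for one parameter set (hypothetical numbers)
    print(params[0], combined_likelihood([0.82, 0.64, 0.71, 0.77]))
    ```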

  14. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Hogg, David W., E-mail: iczekala@cfa.harvard.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY, 10003 (United States)

    2015-10-20

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.

  15. Constructing a Flexible Likelihood Function for Spectroscopic Inference

    Science.gov (United States)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Hogg, David W.; Green, Gregory M.

    2015-10-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.

  16. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  17. Taming systematic uncertainties at the LHC with the central limit theorem

    Energy Technology Data Exchange (ETDEWEB)

    Fichet, Sylvain, E-mail: sylvain@ift.unesp.br

    2016-10-15

    We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and under mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are i) a very light treatment of systematic uncertainties ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.
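
    A tiny numerical illustration of the central-limit combination described above, using first-order (linear) propagation of many small multiplicative systematics rather than the second-order treatment of the paper; the observable, the number of nuisances and their sizes are all made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical observable with many small multiplicative systematic shifts:
    # O(theta) = O0 * (1 + sum_i delta_i * theta_i), theta_i ~ N(0, 1) nuisances.
    O0 = 100.0
    deltas = rng.uniform(0.002, 0.01, size=50)        # 50 small systematics

    # Brute-force marginalisation over the nuisances by Monte Carlo
    thetas = rng.standard_normal((200_000, deltas.size))
    O_mc = O0 * (1.0 + thetas @ deltas)

    # Central-limit shortcut: the combined effect is approximately Gaussian with
    # variance equal to the quadrature sum of the individual contributions.
    sigma_clt = O0 * np.sqrt(np.sum(deltas ** 2))

    print(O_mc.std(), sigma_clt)    # the two numbers agree closely
    ```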

  18. Taming systematic uncertainties at the LHC with the central limit theorem

    CERN Document Server

    Fichet, Sylvain

    2016-01-01

    We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and under mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are i) a very light treatment of systematic uncertainties ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.

  19. Strong Consistency of Maximum Quasi-Likelihood Estimator in Quasi-Likelihood Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 孔繁超

    2008-01-01

    This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of maximum quasi-likelihood estimation (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].

  20. Verification of uncertainty budgets

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Madsen, B.S.

    2005-01-01

    The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data, and th...

  1. Mate choice and uncertainty in the decision process.

    Science.gov (United States)

    Wiegmann, Daniel D; Angeloni, Lisa M

    2007-12-21

    The behavior of females in search of a mate determines the likelihood that a high quality male is encountered in the search process and alternative search strategies provide different fitness returns to searchers. Models of search behavior are typically formulated on an assumption that the quality of prospective mates is revealed to searchers without error, either directly or by inspection of a perfectly informative phenotypic character. But recent theoretical developments suggest that the relative performance of a search strategy may be sensitive to any uncertainty associated with the to-be-realized fitness benefit of mate choice decisions. Indeed, uncertainty in the decision process is inevitable whenever unobserved male attributes influence the fitness of searchers. In this paper, we derive solutions to the sequential search strategy and the fixed sample search strategy for the general situation in which observed and unobserved male attributes affect the fitness consequences of female mate choice decisions and we determine how the magnitude of various parameters that are influential in the standard models alter these more general solutions. The distribution of unobserved attributes amongst prospective mates determines the uncertainty of mate choice decisions (the reliability of an observed male character as a predictor of male quality) and the realized functional relationship between an observed male character and the fitness return to searchers. The uncertainty of mate choice decisions induced by unobserved male attributes has no influence on the generalized model solutions. Thus, the results of earlier studies of these search models that rely on the use of a perfectly informative male character apply even if an observed male trait does not reveal the quality of prospective mates with certainty. But the solutions are sensitive to any changes of the distribution of unobserved male attributes that alter the realized functional relationship between an observed

  2. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    Science.gov (United States)

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability

  3. Molecular clock fork phylogenies: closed form analytic maximum likelihood solutions.

    Science.gov (United States)

    Chor, Benny; Snir, Sagi

    2004-12-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Quoting Ziheng Yang, who initiated the analytic approach, "this seems to be the simplest case, but has many of the conceptual and statistical complexities involved in phylogenetic estimation." In this work, we give general analytic solutions for a family of trees with four taxa, two-state characters, under a molecular clock. The change from three to four taxa incurs a major increase in the complexity of the underlying algebraic system, and requires novel techniques and approaches. We start by presenting the general maximum likelihood problem on phylogenetic trees as a constrained optimization problem, and the resulting system of polynomial equations. In full generality, it is infeasible to solve this system, therefore specialized tools for the molecular clock case are developed. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). We combine the ultrametric properties of molecular clock fork trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations for the fork. We finally employ symbolic algebra software to obtain closed form analytic solutions (expressed parametrically in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that each fork topology has a unique (local and global) ML point.

  4. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  5. Mean square convergence rates for maximum quasi-likelihood estimator

    Directory of Open Access Journals (Sweden)

    Arnoud V. den Boer

    2015-03-01

    Full Text Available In this note we study the behavior of maximum quasilikelihood estimators (MQLEs) for a class of statistical models, in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results are related to guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known a.s. rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.

  6. Use of different sampling schemes in machine learning-based prediction of hydrological models' uncertainty

    Science.gov (United States)

    Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann

    2013-04-01

    In recent years, a lot of attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of uncertainty estimation depends on the efficiency of the sampling method used to generate the best fit responses (outputs) and on ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of models' uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in West Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past events precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best since there is no basis for comparison). A solution could be to form a committee of all models U and
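
    The MLUE idea mentioned above (regress sampled uncertainty quantiles on representative input variables) can be sketched with standard tools. Everything below is hypothetical: the features, the stand-in Monte Carlo ensemble and the use of gradient boosting as the regressor "U" are illustrative choices, not those of the study.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(9)

    # Features available at prediction time: recent rainfall and flow (toy values)
    n_steps = 2000
    rain = rng.gamma(2.0, 3.0, n_steps)
    flow = rng.gamma(3.0, 5.0, n_steps)
    X = np.column_stack([rain, flow])

    # Stand-in for the Monte Carlo (or GLUE/MCMC/...) ensemble of model outputs
    # at each time step; in MLUE these come from the sampling run on calibration data
    ensemble = (0.9 * flow[:, None] + 1.2 * rain[:, None]
                + rng.normal(0.0, 1.0 + 0.3 * rain[:, None], size=(n_steps, 200)))
    q05 = np.percentile(ensemble, 5, axis=1)
    q95 = np.percentile(ensemble, 95, axis=1)

    # Train one regressor per quantile: the machine-learning model "U"
    lower = GradientBoostingRegressor().fit(X, q05)
    upper = GradientBoostingRegressor().fit(X, q95)

    # Predict uncertainty bounds for new, unseen conditions
    x_new = np.array([[6.0, 25.0]])
    print(lower.predict(x_new), upper.predict(x_new))
    ```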

  7. Visualising Uncertainty for Decision Support

    Science.gov (United States)

    2016-12-01

    uncertainty is generally associated with the rendering models and algorithms used to generate the visualisation. For example, rendered 3D scenes will...emotional expression (smiling, neutral and frowning) (see Figure 24). In this study, participants were asked to view all faces and assess the

  8. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
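
    A minimal Gaussian illustration of the equivalence referred to above: for jointly Gaussian autoregressive processes, the transfer entropy X -> Y equals half the log ratio of residual variances of the restricted and full regressions, i.e. the log-likelihood-ratio statistic per observation up to a factor of two. The VAR coefficients below are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Simulate a bivariate VAR(1) in which X drives Y
    T = 5000
    x = np.zeros(T); y = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.5 * x[t - 1] + rng.normal()
        y[t] = 0.4 * y[t - 1] + 0.3 * x[t - 1] + rng.normal()

    def resid_var(target, regressors):
        beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
        return np.var(target - regressors @ beta)

    ones = np.ones(T - 1)
    full = np.column_stack([ones, y[:-1], x[:-1]])   # past of Y and of X
    restr = np.column_stack([ones, y[:-1]])          # past of Y only

    # Gaussian transfer entropy X -> Y; 2N times this value is the
    # log-likelihood-ratio statistic of the nested regression models.
    te = 0.5 * np.log(resid_var(y[1:], restr) / resid_var(y[1:], full))
    print(te)    # positive, reflecting the X -> Y coupling
    ```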

  9. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  10. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  11. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
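
    The objective being maximised is easy to show on a small dense example. The sketch below uses an O(n^3) Cholesky factorisation on synthetic data with an exponential covariance; the whole point of the paper is to replace exactly these dense factorisations with H-matrix approximations to reach the quoted O(kn log n) cost, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(11)

    # Synthetic spatial data: n locations, exponential covariance
    n = 300
    locs = rng.random((n, 2))
    D = cdist(locs, locs)
    C_true = 1.0 * np.exp(-D / 0.2)                 # variance 1.0, length 0.2
    z = rng.multivariate_normal(np.zeros(n), C_true)

    def neg_loglik(log_params):
        """Negative Gaussian log-likelihood (constants dropped)."""
        var, length = np.exp(log_params)
        C = var * np.exp(-D / length) + 1e-6 * np.eye(n)
        L = np.linalg.cholesky(C)       # dense here; H-matrices make this scalable
        alpha = np.linalg.solve(L, z)
        return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

    res = minimize(neg_loglik, x0=np.log([0.5, 0.1]), method="Nelder-Mead")
    print(np.exp(res.x))    # estimated (variance, correlation length)
    ```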

  12. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well-known that in some situations it is not easy to compute the likelihood function, as the dataset might be large or the model too complex. In such contexts composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods is used as a procedure for testing in the context of composite likelihood. In this paper we intro...

  13. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    Science.gov (United States)

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.
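
    The pseudo-marginal mechanism described above (an unbiased likelihood estimate plugged into Metropolis-Hastings, with the estimate of the current state stored and reused) can be sketched on a toy latent-Gaussian model; this is only a generic stand-in for the elliptically contoured estimator developed in the paper, with made-up parameter values and a flat prior.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)

    # Toy latent-variable model: x_i = theta + z_i + eps_i, z_i ~ N(0, tau^2),
    # eps_i ~ N(0, 1); the marginal likelihood is estimated, not computed exactly.
    tau, theta_true = 0.5, 2.0
    x = theta_true + rng.normal(0.0, tau, 60) + rng.normal(0.0, 1.0, 60)

    def lik_estimate(theta, n_aux=30):
        """Unbiased Monte Carlo estimate of prod_i p(x_i | theta), using
        independent auxiliary draws of the latent z for each observation."""
        z = rng.normal(0.0, tau, size=(n_aux, x.size))
        per_obs = norm.pdf(x[None, :], loc=theta + z, scale=1.0).mean(axis=0)
        return float(np.prod(per_obs))

    # Pseudo-marginal Metropolis-Hastings: reusing the stored estimate of the
    # current state is what keeps the stationary distribution exact.
    theta, lik = 0.0, lik_estimate(0.0)
    chain = []
    for _ in range(4000):
        prop = theta + rng.normal(0.0, 0.3)
        lik_prop = lik_estimate(prop)
        if rng.random() < lik_prop / lik:
            theta, lik = prop, lik_prop
        chain.append(theta)

    print(np.mean(chain[1000:]))     # close to theta_true
    ```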

  14. Small-sample likelihood inference in extreme-value regression models

    CERN Document Server

    Ferrari, Silvia L P

    2012-01-01

    We deal with a general class of extreme-value regression models introduced by Barreto-Souza and Vasconcellos (2011). Our goal is to derive an adjusted likelihood ratio statistic that is approximately distributed as χ2 with a high degree of accuracy. Although the adjusted statistic requires more computational effort than its unadjusted counterpart, it is shown that the adjustment term has a simple compact form that can be easily implemented in standard statistical software. Further, we compare the finite sample performance of the three classical tests (likelihood ratio, Wald, and score), the gradient test that has been recently proposed by Terrell (2002), and the adjusted likelihood ratio test obtained in this paper. Our simulations favor the latter. Applications of our results are presented. Key words: Extreme-value regression; Gradient test; Gumbel distribution; Likelihood ratio test; Nonlinear models; Score test; Small-sample adjustments; Wald test.

  15. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior distributions, we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  16. CMB Power Spectrum Likelihood with ILC

    CERN Document Server

    Dick, Jason; Delabrouille, Jacques

    2012-01-01

    We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.

  17. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to the missed-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even ... are challenging. It proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from even and odd infrared fields (dark and bright pupil) and their corresponding subtractive image into one single observation model...

  18. Assessing uncertainties in solute transport models: Upper Narew case study

    Science.gov (United States)

    Osuch, M.; Romanowicz, R.; Napiórkowski, J. J.

    2009-04-01

    This paper evaluates uncertainties in two solute transport models based on tracer experiment data from the Upper River Narew. Data Based Mechanistic and transient storage models were applied to Rhodamine WT tracer observations. We focus on the analysis of uncertainty and the sensitivity of model predictions to varying physical parameters, such as dispersion and channel geometry. An advection-dispersion model with dead zones (Transient Storage model) adequately describes the transport of pollutants in a single channel river with multiple storage. The applied transient storage model is deterministic; it assumes that observations are free of errors and the model structure perfectly describes the process of transport of conservative pollutants. In order to take into account the model and observation errors, an uncertainty analysis is required. In this study we used a combination of the Generalized Likelihood Uncertainty Estimation technique (GLUE) and the variance based Global Sensitivity Analysis (GSA). The combination is straightforward as the same samples (Sobol samples) were generated for GLUE analysis and for sensitivity assessment. Additionally, the results of the sensitivity analysis were used to specify the best parameter ranges and their prior distributions for the evaluation of predictive model uncertainty using the GLUE methodology. Apart from predictions of pollutant transport trajectories, two ecological indicators were also studied (time over the threshold concentration and maximum concentration). In particular, a sensitivity analysis of the length of "over the threshold" period shows an interesting multi-modal dependence on model parameters. This behavior is a result of the direct influence of parameters on different parts of the dynamic response of the system. As an alternative to the transient storage model, a Data Based Mechanistic approach was tested. Here, the model is identified and the parameters are estimated from available time series data using
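
    The GLUE/GSA combination above rests on variance-based sensitivity indices. The sketch below estimates crude first-order indices for a made-up three-parameter stand-in for the transient storage model (an arbitrary function of dispersion, exchange rate and storage ratio), using a simple binning estimator rather than the Sobol-sample estimator used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(21)

    # Hypothetical response: time over a threshold as a function of three
    # scaled parameters (this function is invented for illustration only)
    def time_over_threshold(D, alpha, beta):
        return 10 * np.exp(-2 * D) + 5 * alpha * beta + 0.5 * np.sin(6 * beta)

    N = 50_000
    X = rng.random((N, 3))
    Y = time_over_threshold(X[:, 0], X[:, 1], X[:, 2])

    def first_order_index(xi, y, bins=50):
        """Crude first-order index Var(E[Y|X_i]) / Var(Y), estimated by
        binning X_i (a simple stand-in for a Sobol estimator)."""
        edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        counts = np.bincount(idx, minlength=bins)
        return np.average((cond_means - y.mean()) ** 2, weights=counts) / y.var()

    for name, col in zip(["dispersion", "exchange", "storage"], range(3)):
        print(name, round(first_order_index(X[:, col], Y), 3))
    ```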

  19. Uncertainty assessment of water quality modeling for a small-scale urban catchment using the GLUE methodology: a case study in Shanghai, China.

    Science.gov (United States)

    Zhang, Wei; Li, Tian; Dai, Meihong

    2015-06-01

    There is often great uncertainty in water quality modeling for urban drainage systems because water quality variation in such systems is complex and affected by many factors. The stormwater management model (SWMM) was applied to a small-scale urban catchment with a simple and well-maintained stormwater drainage system without illicit connections. This was done to assess uncertainty in build-up and wash-off modeling of pollutants within the generalized likelihood uncertainty estimation (GLUE) methodology, based on a well-calibrated water quantity model. The results indicated great uncertainty of water quality modeling within the GLUE methodology. Comparison of uncertainties in the various pollutant build-up and wash-off models available in SWMM indicated that those uncertainties varied slightly; this may be a consequence of the specific characteristics of the rainfall events and experimental sites used in the study. The uncertainty analysis of water quality parameters in SWMM helps to evaluate model reliability effectively and provides a reference for similar research and applications.

  20. Adaptive framework for uncertainty analysis in electromagnetic field measurements.

    Science.gov (United States)

    Prieto, Javier; Alonso, Alonso A; de la Rosa, Ramón; Carrera, Albano

    2015-04-01

    Misinterpretation of uncertainty in the measurement of the electromagnetic field (EMF) strength may lead to an underestimation of exposure risk or an overestimation of required measurements. The Guide to the Expression of Uncertainty in Measurement (GUM) has been internationally adopted as a de facto standard for uncertainty assessment. However, analyses under such an approach commonly assume unrealistic static models or neglect relevant prior information, resulting in non-robust uncertainties. This study proposes a principled and systematic framework for uncertainty analysis that fuses information from current measurements and prior knowledge. Such a framework dynamically adapts to data by exploiting a likelihood function based on kernel mixtures and incorporates flexible choices of prior information by applying importance sampling. The validity of the proposed techniques is assessed from measurements performed with a broadband radiation meter and an isotropic field probe. The developed framework significantly outperforms the GUM approach, achieving a reduction of 28% in measurement uncertainty.
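
    A hedged sketch of the general idea described above follows: a data-adaptive likelihood built from a Gaussian kernel mixture over repeated field-strength readings is combined with prior knowledge via importance sampling. The readings, the prior, and the values in V/m are invented placeholders, not the authors' measurement setup.

```python
# Hedged sketch: kernel-mixture likelihood over repeated field-strength readings,
# combined with prior knowledge via importance sampling.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
readings = rng.normal(2.1, 0.15, size=40)        # V/m, illustrative EMF samples

kde = gaussian_kde(readings)                     # data-adaptive likelihood model

# Prior knowledge about the true field strength (e.g., from earlier surveys).
prior = norm(loc=2.0, scale=0.5)

# Importance sampling: draw from the prior, weight by the kernel likelihood.
candidates = prior.rvs(size=20000, random_state=rng)
weights = kde(candidates)
weights /= weights.sum()

posterior_mean = np.sum(weights * candidates)
posterior_std = np.sqrt(np.sum(weights * (candidates - posterior_mean) ** 2))
print(f"field strength: {posterior_mean:.2f} +/- {posterior_std:.2f} V/m")
```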

  1. CUSUM control charts based on likelihood ratio for preliminary analysis

    Institute of Scientific and Technical Information of China (English)

    Yi DAI; Zhao-jun WANG; Chang-liang ZOU

    2007-01-01

    To detect and estimate a shift in either the mean or the deviation, or both, in preliminary analysis, the most popular statistical process control (SPC) tool is the control chart based on the likelihood ratio test (LRT). Sullivan and Woodall pointed out that the test statistic lrt(n1, n2) is approximately distributed as χ2(2) when the sample sizes n, n1 and n2 are very large, where n1 = 2, 3, ..., n - 2 and n2 = n - n1. So it is inevitable that n1 or n2 is not large. In this paper the limit distribution of lrt(n1, n2) for fixed n1 or n2 is derived, and exact analytic formulae for evaluating the expectation and the variance of the limit distribution are also obtained. In addition, the properties of the standardized likelihood ratio statistic slr(n1, n) are discussed. Although slr(n1, n) contains the most important information, slr(i, n) (i ≠ n1) also contains much information. A cumulative sum (CUSUM) control chart can exploit this additional information. We therefore propose two CUSUM control charts based on the likelihood ratio statistics for the preliminary analysis of individual observations. One focuses on detecting shifts in location in the historical data; the other is more general, detecting a shift in either the location or the scale, or both. Moreover, the simulated results show that the two proposed control charts are, respectively, superior to their competitors not only in the detection of sustained shifts but also in the detection of some other out-of-control situations considered in this paper.
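
    The sketch below illustrates the kind of likelihood-ratio statistic discussed above for normally distributed data: lrt(n1, n2) compares a single-segment fit against separate means and variances before and after observation n1. It is a generic illustration, not the authors' exact standardization slr(n1, n).

```python
# Hedged sketch: -2 log likelihood ratio for a change in mean and/or variance
# of normal data at split point n1 (MLE variances).
import numpy as np

def lrt(x, n1):
    """H0: one N(mu, sigma^2) segment vs H1: different mean/variance before and after n1."""
    x = np.asarray(x, dtype=float)
    n = x.size
    x1, x2 = x[:n1], x[n1:]
    s0 = np.mean((x - x.mean()) ** 2)
    s1 = np.mean((x1 - x1.mean()) ** 2)
    s2 = np.mean((x2 - x2.mean()) ** 2)
    return n * np.log(s0) - n1 * np.log(s1) - (n - n1) * np.log(s2)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.5, 2, 20)])  # shift after obs. 30
stats = [lrt(x, n1) for n1 in range(2, x.size - 1)]                 # n1 = 2, ..., n - 2
print("most likely change point:", 2 + int(np.argmax(stats)))
```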

  3. Modeling and inverse problems in the presence of uncertainty

    CERN Document Server

    Banks, H T; Thompson, W Clayton

    2014-01-01

    Modeling and Inverse Problems in the Presence of Uncertainty collects recent research-including the authors' own substantial projects-on uncertainty propagation and quantification. It covers two sources of uncertainty: where uncertainty is present primarily due to measurement errors and where uncertainty is present due to the modeling formulation itself. After a useful review of relevant probability and statistical concepts, the book summarizes mathematical and statistical aspects of inverse problem methodology, including ordinary, weighted, and generalized least-squares formulations. It then

  4. LIKEDM: Likelihood calculator of dark matter detection

    Science.gov (United States)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  5. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
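
    A hedged sketch of what a multiplicative hybrid rate model can look like is given below: a uniform baseline rate per cell is scaled by powers of normalized covariates (for example a strain-rate field and smoothed seismicity) and fitted with a cell-wise Poisson log-likelihood. The synthetic catalogue and parameter names are illustrative, not the New Zealand data or the authors' optimization.

```python
# Hedged sketch of a multiplicative hybrid rate model: a baseline rate per cell
# scaled by powers of covariates, fitted by a cell-wise Poisson log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_cells = 500
covariates = np.abs(rng.normal(size=(n_cells, 2))) + 0.1     # e.g. shear strain rate, smoothed seismicity
counts = rng.poisson(0.05 * covariates[:, 0], size=n_cells)  # synthetic earthquake counts per cell

def neg_loglik(params):
    log_base, a1, a2 = params
    # multiplicative hybrid: rate_i = base * c1_i**a1 * c2_i**a2
    rate = np.exp(log_base) * covariates[:, 0] ** a1 * covariates[:, 1] ** a2
    # Poisson log-likelihood up to the constant factorial term
    return -np.sum(counts * np.log(rate) - rate)

fit = minimize(neg_loglik, x0=[np.log(0.1), 0.0, 0.0], method="Nelder-Mead")
print("fitted covariate exponents:", np.round(fit.x[1:], 2))
```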

  6. CORA - emission line fitting with Maximum Likelihood

    Science.gov (United States)

    Ness, J.-U.; Wichmann, R.

    2002-07-01

    The advent of pipeline-processed data both from space- and ground-based observatories often eliminates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
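
    In the spirit of the Poisson treatment described above (but not the CORA implementation itself), the following sketch fits a Gaussian emission line on a flat background to low-count binned data by maximizing the Poisson log-likelihood; the wavelength grid and counts are simulated placeholders.

```python
# Hedged sketch of Poisson maximum-likelihood line fitting: Gaussian line plus
# flat background, fitted to simulated low-count binned data.
import numpy as np
from scipy.optimize import minimize

wav = np.linspace(13.3, 13.7, 80)                        # wavelength grid (Angstrom)
rng = np.random.default_rng(2)
true_model = 0.5 + 6.0 * np.exp(-0.5 * ((wav - 13.5) / 0.01) ** 2)
counts = rng.poisson(true_model)                         # simulated low-count spectrum

def neg_log_likelihood(p):
    bkg, flux, centre, width = p
    if bkg <= 0 or flux < 0 or width <= 0:
        return np.inf
    model = bkg + flux * np.exp(-0.5 * ((wav - centre) / width) ** 2)
    # Poisson log-likelihood up to a constant (the log k! term drops out)
    return -np.sum(counts * np.log(model) - model)

fit = minimize(neg_log_likelihood, x0=[1.0, 3.0, 13.5, 0.02], method="Nelder-Mead")
print("background, line amplitude, centre, width:", np.round(fit.x, 3))
```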

  7. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  8. Evidence for extra radiation? Profile likelihood versus Bayesian posterior

    CERN Document Server

    Hamann, Jan

    2011-01-01

    A number of recent analyses of cosmological data have reported hints for the presence of extra radiation beyond the standard model expectation. In order to test the robustness of these claims under different methods of constructing parameter constraints, we perform a Bayesian posterior-based and a likelihood profile-based analysis of current data. We confirm the presence of a slight discrepancy between posterior- and profile-based constraints, with the marginalised posterior preferring higher values of the effective number of neutrino species N_eff. This can be traced back to a volume effect occurring during the marginalisation process, and we demonstrate that the effect is related to the fact that cosmic microwave background (CMB) data constrain N_eff only indirectly via the redshift of matter-radiation equality. Once present CMB data are combined with external information about, e.g., the Hubble parameter, the difference between the methods becomes small compared to the uncertainty of N_eff. We conclude tha...

  9. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.

  10. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression

  11. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  12. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  13. Gamma-Ray Telescope and Uncertainty Principle

    Science.gov (United States)

    Shivalingaswamy, T.; Kagali, B. A.

    2012-01-01

    Heisenberg's Uncertainty Principle is one of the important basic principles of quantum mechanics. In most of the books on quantum mechanics, this uncertainty principle is generally illustrated with the help of a gamma ray microscope, wherein neither the image formation criterion nor the lens properties are taken into account. Thus a better…

  14. Stochastic variational approach to minimum uncertainty states

    CERN Document Server

    Illuminati, F; Illuminati, F; Viola, L

    1995-01-01

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schrödinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials.

  15. Symbolic computation for evaluation of measurement uncertainty

    OpenAIRE

    Wei, P.; Yang, QP; Salleh; Jones, BE

    2007-01-01

    In recent years, with the rapid development of symbolic computation, the integration of symbolic and numeric methods is increasingly applied in various applications. This paper proposes the use of symbolic computation for the evaluation of measurement uncertainty. The general method and procedure are discussed, and its great potential and powerful features for measurement uncertainty evaluation are demonstrated through examples.
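
    As an illustration of the idea (not the authors' implementation), the sketch below uses the SymPy library to propagate standard uncertainties through a simple measurement equation with the first-order GUM formula, assuming uncorrelated inputs; the quantity R = V/I and the numerical values are invented.

```python
# Hedged sketch: symbolic GUM-style propagation for an illustrative measurement
# equation R = V / I, assuming uncorrelated inputs.
import sympy as sp

V, I = sp.symbols("V I", positive=True)
R = V / I                                      # measurement model

inputs = {V: (10.0, 0.02), I: (2.0, 0.005)}    # (best estimate, standard uncertainty)

# Combined standard uncertainty: u_c^2 = sum_i (dR/dx_i * u_i)^2
u_c_sq = sum((sp.diff(R, x) * u) ** 2 for x, (_, u) in inputs.items())
subs = {x: val for x, (val, _) in inputs.items()}

print("R =", float(R.subs(subs)))
print("u_c(R) =", float(sp.sqrt(u_c_sq).subs(subs)))
```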

  16. Public understanding of visual representations of uncertainty in temperature forecasts

    NARCIS (Netherlands)

    Tak, S.W.; Toet, A.; Erp, J.B.F. van

    2015-01-01

    Multiday weather forecasts often include graphical representations of uncertainty. However, visual representations of probabilistic events are often misinterpreted by the general public. Although various uncertainty visualizations are now in use, the parameters that determine their successful deploy

  17. Uncertainty Communication. Issues and good practice

    Energy Technology Data Exchange (ETDEWEB)

    Kloprogge, P.; Van der Sluijs, J.; Wardekker, A.

    2007-12-15

    In 2003 the Netherlands Environmental Assessment Agency (MNP) published the RIVM/MNP Guidance for Uncertainty Assessment and Communication. The Guidance assists in dealing with uncertainty in environmental assessments. Dealing with uncertainty is essential because assessment results regarding complex environmental issues are of limited value if the uncertainties have not been taken into account adequately. A careful analysis of uncertainties in an environmental assessment is required, but even more important is the effective communication of these uncertainties in the presentation of assessment results. The Guidance yields rich and differentiated insights in uncertainty, but the relevance of this uncertainty information may vary across audiences and uses of assessment results. Therefore, the reporting of uncertainties is one of the six key issues that is addressed in the Guidance. In practice, users of the Guidance felt a need for more practical assistance in the reporting of uncertainty information. This report explores the issue of uncertainty communication in more detail, and contains more detailed guidance on the communication of uncertainty. In order to make this a 'stand alone' document several questions that are mentioned in the detailed Guidance have been repeated here. This document thus has some overlap with the detailed Guidance. Part 1 gives a general introduction to the issue of communicating uncertainty information. It offers guidelines for (fine)tuning the communication to the intended audiences and context of a report, discusses how readers of a report tend to handle uncertainty information, and ends with a list of criteria that uncertainty communication needs to meet to increase its effectiveness. Part 2 helps writers to analyze the context in which communication takes place, and helps to map the audiences, and their information needs. It further helps to reflect upon anticipated uses and possible impacts of the uncertainty information on

  18. Inference in HIV dynamics models via hierarchical likelihood

    OpenAIRE

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...

  19. tmle : An R Package for Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Susan Gruber

    2012-11-01

    Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient double-robust semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates, including an additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.

  20. Nonparametric likelihood based estimation of linear filters for point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2015-01-01

    result is a representation of the gradient of the log-likelihood, which we use to derive computable approximations of the log-likelihood and the gradient by time discretization. These approximations are then used to minimize the approximate penalized log-likelihood. For time and memory efficiency...

  1. Critical loads - assessment of uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, A.

    1998-10-01

    The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO2 concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO2 concentrations. Modifying SO2 concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data

  2. Uncertainty in hydrological change modelling

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige

    Hydrological change modelling methodologies generally use climate model outputs to force hydrological simulations under changed conditions. There are nested sources of uncertainty throughout this methodology, including choice of climate model and subsequent bias correction methods. This Ph.D. study evaluates the uncertainty of the impact of climate change in hydrological simulations given multiple climate models and bias correction methods of varying complexity. Three distribution-based scaling methods (DBS) were developed and benchmarked against a more simplistic and commonly used delta change (DC) approach. These climate model projections were then used to force hydrological simulations under climate change for the island Sjælland in Denmark to analyse the contribution of different climate models and bias correction methods to overall uncertainty in the hydrological change modelling...

  3. Nanoparticles: Uncertainty Risk Analysis

    DEFF Research Database (Denmark)

    Grieger, Khara Deanne; Hansen, Steffen Foss; Baun, Anders

    2012-01-01

    Scientific uncertainty plays a major role in assessing the potential environmental risks of nanoparticles. Moreover, there is uncertainty within fundamental data and information regarding the potential environmental and health risks of nanoparticles, hampering risk assessments based on standard a...

  4. Uncertainties in Site Amplification Estimation

    Science.gov (United States)

    Cramer, C. H.; Bonilla, F.; Hartzell, S.

    2004-12-01

    Typically geophysical profiles (layer thickness, velocity, density, Q) and dynamic soil properties (modulus and damping versus strain curves) are used with appropriate input ground motions in a soil response computer code to estimate site amplification. Uncertainties in observations can be used to generate a distribution of possible site amplifications. The biggest sources of uncertainty in site amplification estimates are the uncertainties in (1) input ground motions, (2) shear-wave velocities (Vs), (3) dynamic soil properties, (4) soil response code used, and (5) dynamic pore pressure effects. A study of site amplification was conducted for the 1 km thick Mississippi embayment sediments beneath Memphis, Tennessee (see USGS OFR 04-1294 on the web). In this study, the first three sources of uncertainty resulted in a combined coefficient of variation of 10 to 60 percent. The choice of soil response computer program can lead to uncertainties in median estimates of +/- 50 percent. Dynamic pore pressure effects due to the passing of seismic waves in saturated soft sediments are normally not considered in site-amplification studies and can contribute further large uncertainties in site amplification estimates. The effects may range from dilatancy and high-frequency amplification (such as observed at some sites during the 1993 Kushiro-Oki, Japan and 2001 Nisqually, Washington earthquakes) to general soil failure and deamplification of ground motions (such as observed at Treasure Island during the 1989 Loma Prieta, California earthquake). Two case studies using geotechnical data from downhole arrays in Kushiro, Japan, and the Wildlife Refuge, California, with one dynamic code, NOAH, will be presented as examples of modeling uncertainties associated with these effects. Additionally, an example of inversion for estimates of in-situ dilatancy-related geotechnical modeling parameters will be presented for the Kushiro, Japan site.

  5. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    Science.gov (United States)

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…

  6. MLDS: Maximum Likelihood Difference Scaling in R

    Directory of Open Access Journals (Sweden)

    Kenneth Knoblauch

    2008-01-01

    Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.

  7. Parameter likelihood of intrinsic ellipticity correlations

    CERN Document Server

    Capranico, Federica; Schaefer, Bjoern Malte

    2012-01-01

    Subject of this paper are the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...

  8. Dishonestly increasing the likelihood of winning

    Directory of Open Access Journals (Sweden)

    Shaul Shalvi

    2012-05-01

    Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.

  9. Constraining Parameter Uncertainty in Simulations of Water and Heat Dynamics in Seasonally Frozen Soil Using Limited Observed Data

    Directory of Open Access Journals (Sweden)

    Mousong Wu

    2016-02-01

    Full Text Available Water and energy processes in frozen soils are important for better understanding hydrologic processes and water resources management in cold regions. To investigate the water and energy balance in seasonally frozen soils, CoupModel combined with the generalized likelihood uncertainty estimation (GLUE) method was used. Simulation work on water and heat processes in frozen soil in northern China during the 2012/2013 winter was conducted. Ensemble simulations through the Monte Carlo sampling method were generated for uncertainty analysis. Behavioral simulations were selected based on combinations of multiple model performance index criteria with respect to simulated soil water and temperature at four depths (5 cm, 15 cm, 25 cm, and 35 cm). Posterior distributions for parameters related to soil hydraulics, radiation processes, and heat transport indicated that uncertainties in both input and model structures could influence model performance in modeling water and heat processes in seasonally frozen soils. Seasonal courses in water and energy partitioning were obvious during the winter. Within the day-cycle, soil evaporation/condensation and energy distributions were well captured and clarified as an important phenomenon in the dynamics of the energy balance system. The combination of the CoupModel simulations with the uncertainty-based calibration method provides a way of understanding the seasonal courses of hydrology and energy processes in cold regions with limited data. Additional measurements may be used to further reduce the uncertainty of regulating factors during the different stages of freezing–thawing.
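
    A hedged sketch of the multi-criteria behavioural selection step described above follows: a run is retained only if it meets performance thresholds for both soil temperature and soil moisture at every depth. The arrays and thresholds are synthetic placeholders, not CoupModel output.

```python
# Hedged sketch of multi-criteria behavioural selection in a GLUE setting.
import numpy as np

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(3)
n_runs, n_depths, n_times = 2000, 4, 150
obs_T = rng.normal(-2.0, 3.0, size=(n_depths, n_times))      # placeholder soil temperature
obs_W = rng.uniform(0.10, 0.40, size=(n_depths, n_times))    # placeholder soil moisture
bias_T = rng.normal(0.0, 1.0, size=n_runs)                   # run-to-run model error
bias_W = rng.normal(0.0, 0.05, size=n_runs)
sim_T = obs_T[None] + bias_T[:, None, None] + rng.normal(0, 0.5, size=(n_runs, n_depths, n_times))
sim_W = obs_W[None] + bias_W[:, None, None] + rng.normal(0, 0.02, size=(n_runs, n_depths, n_times))

behavioural = np.ones(n_runs, dtype=bool)
for d in range(n_depths):
    nse_T = np.array([nse(sim_T[r, d], obs_T[d]) for r in range(n_runs)])
    nse_W = np.array([nse(sim_W[r, d], obs_W[d]) for r in range(n_runs)])
    behavioural &= (nse_T > 0.8) & (nse_W > 0.5)   # per-depth acceptance thresholds

print(f"behavioural runs: {behavioural.sum()} of {n_runs}")
# Posterior parameter distributions would then be built from the parameter sets
# of the accepted runs (e.g., histograms of each parameter).
```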

  10. Where does the uncertainty come from? Attributing Uncertainty in Conceptual Hydrologic Modelling

    Science.gov (United States)

    Abu Shoaib, S.; Marshall, L. A.; Sharma, A.

    2015-12-01

    Defining an appropriate forecasting model is a key phase in water resources planning and design. Quantification of uncertainty is an important step in the development and application of hydrologic models. In this study, we examine the dependency of hydrologic model uncertainty on the observed model inputs, defined model structure, parameter optimization identifiability and identified likelihood. We present here a new uncertainty metric, the Quantile Flow Deviation or QFD, to evaluate the relative uncertainty due to each of these sources under a range of catchment conditions. Through the metric, we may identify the potential spectrum of uncertainty and variability in model simulations. The QFD assesses uncertainty by estimating the deviation in flows at a given quantile across a range of scenarios. By using a quantile based metric, the change in uncertainty across individual percentiles can be assessed, thereby allowing uncertainty to be expressed as a function of time. The QFD method can be disaggregated to examine any part of the modelling process including the selection of certain model subroutines or forcing data. Case study results (including catchments in Australia and USA) suggest that model structure selection is vital irrespective of the flow percentile of interest or the catchment being studied. Examining the QFD across various quantiles additionally demonstrates that lower yielding catchments may have greater variation due to selected model structures. By incorporating multiple model structures, it is possible to assess (i) the relative importance of various sources of uncertainty, (ii) how these vary with the change in catchment location or hydrologic regime; and (iii) the impact of the length of available observations in uncertainty quantification.
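
    One possible reading of the Quantile Flow Deviation idea is sketched below (illustrative only, not the authors' exact definition): for each flow quantile, the deviation of the simulated quantile is measured across an ensemble of modelling scenarios, so uncertainty can be reported quantile by quantile.

```python
# Illustrative reading of a quantile-based uncertainty metric: spread of each
# flow quantile across an ensemble of modelling scenarios.
import numpy as np

rng = np.random.default_rng(4)
n_scenarios, n_times = 12, 3650                  # e.g. model structures x parameter sets
flows = rng.lognormal(mean=1.0, sigma=0.8, size=(n_scenarios, n_times))

quantiles = np.array([0.1, 0.5, 0.9, 0.99])
q_flows = np.quantile(flows, quantiles, axis=1)          # shape (n_quantiles, n_scenarios)

reference = np.median(q_flows, axis=1, keepdims=True)
qfd = np.mean(np.abs(q_flows - reference), axis=1)       # average deviation per quantile

for q, d in zip(quantiles, qfd):
    print(f"quantile {q:.2f}: mean absolute deviation = {d:.2f} m^3/s")
```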

  11. Uncertainty Relations in Terms of Fisher Information

    Institute of Scientific and Technical Information of China (English)

    LUO Shun-Long

    2001-01-01

    By virtue of the well-known concept of Fisher information in the theory of statistical inference, we obtain an inequality chain which generalizes and refines the conventional Heisenberg uncertainty relations.

  12. Entanglement and discord assisted entropic uncertainty relations under decoherence

    Science.gov (United States)

    Yao, ChunMei; Chen, ZhiHua; Ma, ZhiHao; Severini, Simone; Serafini, Alessio

    2014-09-01

    The uncertainty principle is a crucial aspect of quantum mechanics. It has been shown that quantum entanglement as well as more general notions of correlations, such as quantum discord, can relax or tighten the entropic uncertainty relation in the presence of an ancillary system. We explored the behaviour of entropic uncertainty relations for a system of two qubits, one of which is subject to several forms of independent quantum noise, in both Markovian and non-Markovian regimes. The uncertainties and their lower bounds, identified by the entropic uncertainty relations, increase under independent local unital Markovian noisy channels, but they may decrease under non-unital channels. The behaviour of the uncertainties (and lower bounds) exhibits periodical oscillations due to correlation dynamics under independent non-Markovian reservoirs. In addition, we compare different entropic uncertainty relations in several special cases and find that discord-tightened entropic uncertainty relations offer in general a better estimate of the uncertainties in play.

  13. Generalization of stochastic visuomotor rotations.

    Directory of Open Access Journals (Sweden)

    Hugo L Fernandes

    Full Text Available Generalization studies examine the influence of perturbations imposed on one movement onto other movements. The strength of generalization is traditionally interpreted as a reflection of the similarity of the underlying neural representations. Uncertainty fundamentally affects both sensory integration and learning and is at the heart of many theories of neural representation. However, little is known about how uncertainty, resulting from variability in the environment, affects generalization curves. Here we extend standard movement generalization experiments to ask how uncertainty affects the generalization of visuomotor rotations. We find that although uncertainty affects how fast subjects learn, the perturbation generalizes independently of uncertainty.

  14. Evaluating uncertainty estimates in hydrologic models: borrowing measures from the forecast verification community

    Directory of Open Access Journals (Sweden)

    K. J. Franz

    2011-03-01

    Full Text Available The hydrologic community is generally moving towards the use of probabilistic estimates of streamflow, primarily through the implementation of Ensemble Streamflow Prediction (ESP) systems, ensemble data assimilation methods, or multi-modeling platforms. However, evaluation of probabilistic outputs has not necessarily kept pace with ensemble generation. Much of the modeling community is still performing model evaluation using standard deterministic measures, such as error, correlation, or bias, typically applied to the ensemble mean or median. Probabilistic forecast verification methods have been well developed, particularly in the atmospheric sciences, yet few have been adopted for evaluating uncertainty estimates in hydrologic model simulations. In the current paper, we overview existing probabilistic forecast verification methods and apply the methods to evaluate and compare model ensembles produced from different parameter uncertainty estimation methods. The Generalized Likelihood Uncertainty Estimation (GLUE) method, a modified version of GLUE, and the Shuffled Complex Evolution Metropolis (SCEM) algorithm are used to generate model ensembles for the National Weather Service SACramento Soil Moisture Accounting (SAC-SMA) model for 12 forecast basins located in the Southeastern United States. We evaluate the model ensembles using relevant metrics in the following categories: distribution, correlation, accuracy, conditional statistics, and categorical statistics. We show that the probabilistic metrics are easily adapted to model simulation ensembles and provide a robust analysis of parameter uncertainty, one that is commensurate with the dimension of the ensembles themselves. Application of these methods requires no information in addition to what is already available as part of traditional model validation methodology and considers the entire ensemble or uncertainty range in the approach.
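
    As an example of one standard measure from the forecast-verification literature, the sketch below computes the empirical continuous ranked probability score (CRPS) for a simulation ensemble; the paper evaluates a broader set of distribution, correlation, accuracy, conditional and categorical metrics, and the ensemble here is a synthetic placeholder.

```python
# Illustrative ensemble verification with the empirical CRPS.
import numpy as np

def crps_ensemble(ensemble, obs):
    """Empirical CRPS for one time step: E|X - y| - 0.5 E|X - X'|."""
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(ensemble - obs))
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]))
    return term1 - term2

rng = np.random.default_rng(5)
n_members, n_times = 50, 365
obs = rng.gamma(shape=2.0, scale=10.0, size=n_times)                     # "observed" flows
ens = obs[None, :] * rng.lognormal(0.0, 0.3, size=(n_members, n_times))  # ensemble with spread

scores = [crps_ensemble(ens[:, t], obs[t]) for t in range(n_times)]
print(f"mean CRPS over the period: {np.mean(scores):.2f}")
```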

  15. Sensitivity and predictive uncertainty of the ACASA model at a spruce forest site

    Directory of Open Access Journals (Sweden)

    K. Staudt

    2010-06-01

    Full Text Available The sensitivity and predictive uncertainty of the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) was assessed by employing the Generalized Likelihood Uncertainty Estimation (GLUE) method. ACASA is a stand-scale, multi-layer soil-vegetation-atmosphere transfer model that incorporates a third order closure method to simulate the turbulent exchange of energy and matter within and above the canopy. Fluxes simulated by the model were compared to sensible and latent heat fluxes as well as the net ecosystem exchange measured by an eddy-covariance system above the spruce canopy at the FLUXNET station Waldstein-Weidenbrunnen in the Fichtelgebirge Mountains in Germany. From each of the intensive observation periods carried out within the EGER project (ExchanGE processes in mountainous Regions) in autumn 2007 and summer 2008, five days of flux measurements were selected. A large number (20 000) of model runs using randomly generated parameter sets were performed, and goodness-of-fit measures for all fluxes were calculated for each run. The 10% best model runs for each flux were used for further investigation of the sensitivity of the fluxes to parameter values and to calculate uncertainty bounds.

    The individual fluxes were strongly sensitive to a few parameters, such as the leaf area index. However, the sensitivity analysis also revealed the equifinality of many parameters in the ACASA model for the investigated periods. The analysis of two time periods, each representing different meteorological conditions, provided an insight into the seasonal variation of parameter sensitivity. The calculated uncertainty bounds demonstrated that all fluxes were well reproduced by the ACASA model. In general, uncertainty bounds encompass measured values better when these are conditioned on the respective individual flux only and not on all three fluxes concurrently. Structural weaknesses of the ACASA model concerning the soil respiration
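
    A hedged sketch of how uncertainty bounds of the kind described above can be formed follows: runs are ranked by a goodness-of-fit measure, the best 10% are retained, and likelihood-weighted quantiles of the simulated flux are taken at each time step. The arrays are synthetic placeholders, not ACASA output.

```python
# Hedged sketch: GLUE-style uncertainty bounds from the best 10% of model runs.
import numpy as np

rng = np.random.default_rng(6)
n_runs, n_times = 20000, 240                        # e.g. half-hourly fluxes for five days
obs = 100.0 * np.clip(np.sin(np.linspace(0, 5 * np.pi, n_times)), 0, None)   # toy flux course
bias = rng.normal(0, 30, size=n_runs)               # run-to-run parameter effect
sims = obs[None, :] + bias[:, None] + rng.normal(0, 15, size=(n_runs, n_times))

rmse = np.sqrt(np.mean((sims - obs) ** 2, axis=1))
best = np.argsort(rmse)[: n_runs // 10]             # keep the best 10% of runs
best_sims = sims[best]
weights = 1.0 / rmse[best]
weights /= weights.sum()

# Likelihood-weighted 5% and 95% bounds at every time step.
lower = np.empty(n_times)
upper = np.empty(n_times)
for t in range(n_times):
    order = np.argsort(best_sims[:, t])
    cum = np.cumsum(weights[order])
    lower[t] = best_sims[order[np.searchsorted(cum, 0.05)], t]
    upper[t] = best_sims[order[np.searchsorted(cum, 0.95)], t]

print(f"mean width of the 90% uncertainty band: {np.mean(upper - lower):.1f} W m-2")
```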

  16. Transfer Entropy as a Log-likelihood Ratio

    CERN Document Server

    Barnett, Lionel

    2012-01-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the neurosciences, econometrics and the analysis of complex system dynamics in diverse fields. We show that for a class of parametrised partial Markov models for jointly stochastic processes in discrete time, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression. In the general case, an asymptotic $\chi^2$ distribution for the model transfer entropy estimator is established.
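
    The Gaussian special case mentioned above can be illustrated directly: the log-likelihood ratio between a restricted autoregression (past of Y only) and a full one (past of Y and X) is, up to a factor of the sample size, the estimated transfer entropy from X to Y. The bivariate process below is simulated for illustration.

```python
# Illustration of the Gaussian case: transfer entropy as a log-likelihood ratio
# between restricted and full autoregressions.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # X drives Y

Y = y[1:]
past_y, past_x = y[:-1], x[:-1]

def residual_var(design, target):
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.mean((target - design @ coef) ** 2)

restricted = np.column_stack([np.ones(n - 1), past_y])
full = np.column_stack([np.ones(n - 1), past_y, past_x])

te_hat = 0.5 * np.log(residual_var(restricted, Y) / residual_var(full, Y))
print(f"estimated transfer entropy X->Y: {te_hat:.3f} nats")
print(f"log-likelihood ratio statistic: {2 * (n - 1) * te_hat:.1f}")
```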

  17. On the Performance of Maximum Likelihood Inverse Reinforcement Learning

    CERN Document Server

    Ratia, Héctor; Martinez-Cantin, Ruben

    2012-01-01

    Inverse reinforcement learning (IRL) addresses the problem of recovering a task description given a demonstration of the optimal policy used to solve such a task. The optimal policy is usually provided by an expert or teacher, making IRL specially suitable for the problem of apprenticeship learning. The task description is encoded in the form of a reward function of a Markov decision process (MDP). Several algorithms have been proposed to find the reward function corresponding to a set of demonstrations. One of the algorithms that has provided best results in different applications is a gradient method to optimize a policy squared error criterion. On a parallel line of research, other authors have presented recently a gradient approximation of the maximum likelihood estimate of the reward signal. In general, both approaches approximate the gradient estimate and the criteria at different stages to make the algorithm tractable and efficient. In this work, we provide a detailed description of the different metho...

  18. Empirical likelihood for balanced ranked-set sampled data

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Ranked-set sampling (RSS) often provides more efficient inference than simple random sampling (SRS). In this article, we propose a systematic nonparametric technique, RSS-EL, for hypothesis testing and interval estimation with balanced RSS data using empirical likelihood (EL). We detail the approach for interval estimation and hypothesis testing in one-sample and two-sample problems and general estimating equations. In all three cases, RSS is shown to provide more efficient inference than SRS of the same size. Moreover, the RSS-EL method does not require any easily violated assumptions needed by existing rank-based nonparametric methods for RSS data, such as perfect ranking, identical ranking scheme in two groups, and location shift between two population distributions. The merit of the RSS-EL method is also demonstrated through simulation studies.

  19. Maximum likelihood identification of aircraft stability and control derivatives

    Science.gov (United States)

    Mehra, R. K.; Stepner, D. E.; Tyler, J. S.

    1974-01-01

    Application of a generalized identification method to flight test data analysis. The method is based on the maximum likelihood (ML) criterion and includes output error and equation error methods as special cases. Both the linear and nonlinear models with and without process noise are considered. The flight test data from lateral maneuvers of HL-10 and M2/F3 lifting bodies are processed to determine the lateral stability and control derivatives, instrumentation accuracies, and biases. A comparison is made between the results of the output error method and the ML method for M2/F3 data containing gusts. It is shown that better fits to time histories are obtained by using the ML method. The nonlinear model considered corresponds to the longitudinal equations of the X-22 VTOL aircraft. The data are obtained from a computer simulation and contain both process and measurement noise. The applicability of the ML method to nonlinear models with both process and measurement noise is demonstrated.

  20. Uncertainty under quantum measures and quantum memory

    Science.gov (United States)

    Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing

    2017-04-01

    The uncertainty principle restricts potential information one gains about physical properties of the measured particle. However, if the particle is prepared in entanglement with a quantum memory, the corresponding entropic uncertainty relation will vary. Based on the knowledge of correlations between the measured particle and quantum memory, we have investigated the entropic uncertainty relations for two and multiple measurements and generalized the lower bounds on the sum of Shannon entropies without quantum side information to those that allow quantum memory. In particular, we have obtained generalization of Kaniewski-Tomamichel-Wehner's bound for effective measures and majorization bounds for noneffective measures to allow quantum side information. Furthermore, we have derived several strong bounds for the entropic uncertainty relations in the presence of quantum memory for two and multiple measurements. Finally, potential applications of our results to entanglement witnesses are discussed via the entropic uncertainty relation in the absence of quantum memory.

  1. Parton Distribution Function Uncertainties

    CERN Document Server

    Giele, Walter T.; Kosower, David A.; Giele, Walter T.; Keller, Stephane A.; Kosower, David A.

    2001-01-01

    We present parton distribution functions which include a quantitative estimate of their uncertainties. The parton distribution functions are optimized with respect to deep inelastic proton data, expressing the uncertainties as a density measure over the functional space of parton distribution functions. This leads to a convenient method of propagating the parton distribution function uncertainties to new observables, now expressing the uncertainty as a density in the prediction of the observable. New measurements can easily be included in the optimized sets as added weight functions to the density measure. With the optimized method, no compromises have to be made anywhere in the analysis with regard to the treatment of the uncertainties.

  2. Entry and exit decisions under uncertainty

    DEFF Research Database (Denmark)

    Kongsted, Hans Christian

    1996-01-01

    This paper establishes the general deterministic limit that corresponds to Dixit's model of entry and exit decisions under uncertainty. The interlinked nature of decisions is shown to be essential also in the deterministic limit. A numerical example illustrates the result.

  3. Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters

    CERN Document Server

    Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chiang, H.C.; Christensen, P.R.; Clements, D.L.; Colombo, L.P.L.; Combet, C.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Desert, F.X.; Di Valentino, E.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Gerbino, M.; Giard, M.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hamann, J.; Hansen, F.K.; Harrison, D.L.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Holmes, W.A.; Hornstrup, A.; Huffenberger, K.M.; Hurier, G.; Jaffe, A.H.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Lawrence, C.R.; Le Jeune, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P.B.; Lilley, M.; Linden-Vornle, M.; Lindholm, V.; Lopez-Caniego, M.; Macias-Perez, J.F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Meinhold, P.R.; Melchiorri, A.; Migliaccio, M.; Millea, M.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J.A.; Narimani, A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T.J.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G.W.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; d'Orfeuil, B.Rouille; Rubino-Martin, J.A.; Rusholme, B.; Salvati, L.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Serra, P.; Spencer, L.D.; Spinelli, M.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2015-01-01

    This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of CMB temperature and polarization. They use the hybrid approach employed previously: pixel-based at low multipoles, $\ell$, and a Gaussian approximation to the distribution of cross-power spectra at higher $\ell$. The main improvements are the use of more and better processed data and of Planck polarization data, and more detailed foreground and instrumental models. More than doubling the data allows further checks and enhanced immunity to systematics. Progress in foreground modelling enables a larger sky fraction, contributing to enhanced precision. Improvements in processing and instrumental models further reduce uncertainties. Extensive tests establish robustness and accuracy, from temperature, from polarization, and from their combination, and show that the $\Lambda$CDM model continues to offer a very good fit. We further validate the likelihood against specific extensions to this baseline, suc...

  4. Uncertainty analysis guide

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
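
    A minimal illustration of the Monte Carlo option mentioned above: random input uncertainties are propagated through a model function by sampling the inputs and summarizing the output distribution. The model and the input distributions are invented for illustration and are not taken from the guide.

```python
# Minimal Monte Carlo propagation of input uncertainties through a model.
import numpy as np

def model(k, area, depth):
    """Toy calculated quantity, e.g. a seepage rate."""
    return k * area / depth

rng = np.random.default_rng(8)
n = 100_000
k = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n)   # hydraulic conductivity (m/s)
area = rng.normal(50.0, 2.0, size=n)                      # m^2
depth = rng.triangular(4.0, 5.0, 6.5, size=n)             # m

out = model(k, area, depth)
lo, med, hi = np.percentile(out, [2.5, 50, 97.5])
print(f"median {med:.2e}, 95% interval [{lo:.2e}, {hi:.2e}]")
```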

  5. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal eMushtaq

    2011-10-01

    Full Text Available A growing body of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  6. Uncertainty in flood risk mapping

    Science.gov (United States)

    Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo

    2014-05-01

    A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms will be studied. In this context, flooding occurs when the water runs above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the obtained flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can be translated into erroneous risk prediction. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of parameter uncertainty, which depends on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM) used, on the estimated values of the peak flow and the delineation of flooded areas to be evaluated (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered as a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow

  7. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ{sup 0}{sub 1}, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m{sub χ{sup 0}{sub 1}} 0) but the scalar mass m{sub 0} is poorly constrained. In the wino-LSP case, m{sub 3/2} is constrained to about 900 TeV and m{sub χ{sup 0}{sub 1}} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m{sub 3/2} has just a lower limit >or similar 650 TeV (>or similar 480 TeV) and m{sub χ{sup 0}{sub 1}} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2){sub μ}, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ{sup 0}{sub 1} contributes only a fraction of the cold DM density, future LHC E{sub T}-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B{sub s,d} → μ{sup +}μ{sup -}) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  8. Uncertainty Categorization, Modeling, and Management for Regional Water Supply Planning

    Science.gov (United States)

    Fletcher, S.; Strzepek, K. M.; AlSaati, A.; Alhassan, A.

    2016-12-01

    Many water planners face increased pressure on water supply systems from growing demands, variability in supply and a changing climate. Short-term variation in water availability and demand; long-term uncertainty in climate, groundwater storage, and sectoral competition for water; and varying stakeholder perspectives on the impacts of water shortages make it difficult to assess the necessity of expensive infrastructure investments. We categorize these uncertainties on two dimensions: whether they are the result of stochastic variation or epistemic uncertainty, and whether the uncertainties can be described probabilistically or are deep uncertainties whose likelihood is unknown. We develop a decision framework that combines simulation for probabilistic uncertainty, sensitivity analysis for deep uncertainty and Bayesian decision analysis for uncertainties that are reduced over time with additional information. We apply this framework to two contrasting case studies - drought preparedness in Melbourne, Australia and fossil groundwater depletion in Riyadh, Saudi Arabia - to assess the impacts of different types of uncertainty on infrastructure decisions. Melbourne's water supply system relies on surface water, which is impacted by natural variation in rainfall, and a market-based system for managing water rights. Our results show that small, flexible investment increases can mitigate shortage risk considerably at reduced cost. Riyadh, by contrast, relies primarily on desalination for municipal use and fossil groundwater for agriculture, and a centralized planner makes allocation decisions. Poor regional groundwater measurement makes it difficult to know when groundwater pumping will become uneconomical, resulting in epistemic uncertainty. However, collecting more data can reduce the uncertainty, suggesting the need for different uncertainty modeling and management strategies in Riyadh than in Melbourne. We will categorize the two systems and propose appropriate

  9. Analysis of parameter uncertainty in hydrological and sediment modeling using GLUE method: a case study of SWAT model applied to Three Gorges Reservoir Region, China

    Directory of Open Access Journals (Sweden)

    Z. Y. Shen

    2012-01-01

    Full Text Available The calibration of hydrologic models is a worldwide challenge due to the uncertainty involved in the large number of parameters. The difficulty even increases in a region with high seasonal variation of precipitation, where the results exhibit high heteroscedasticity and autocorrelation. In this study, the Generalized Likelihood Uncertainty Estimation (GLUE) method was combined with the Soil and Water Assessment Tool (SWAT) to quantify the parameter uncertainty of the stream flow and sediment simulation in the Daning River Watershed of the Three Gorges Reservoir Region (TGRA), China. Only a few parameters were found to affect the final simulation output significantly. The results showed that sediment simulation presented greater uncertainty than stream flow, and uncertainty was even greater in high precipitation conditions (from May to September) than during the dry season. The main uncertainty sources of stream flow came from the catchment process, while the channel process greatly impacts the sediment simulation. It should be noted that optimal ranges could be obtained for identifiable parameters such as CANMX, ALPHA_BNK and SOL_K using the calibration method. However, equifinality was also observed in hydrologic modeling in TGRA. This study demonstrated that care must be taken when calibrating the SWAT model with non-identifiable parameters because these may lead to equifinality of the parameter values. It is anticipated that this study will provide useful information for hydrology modeling related to policy development in the Three Gorges Reservoir Region (TGRA) and other similar areas.
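
    A minimal sketch of the GLUE procedure used in this record, under strong simplifying assumptions: the two-parameter toy model, the Nash-Sutcliffe likelihood measure, the behavioural threshold of 0.5 and the uniform priors are illustrative choices, not the SWAT set-up of the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def toy_model(params, rain):
          """Stand-in for the hydrological model: a two-parameter linear reservoir."""
          k, c = params
          storage, q = 0.0, np.zeros_like(rain)
          for t, p in enumerate(rain):
              storage += p
              q[t] = c * storage / k      # only the ratio c/k matters here (equifinality)
              storage -= q[t]
          return q

      rain = rng.gamma(2.0, 2.0, 200)                       # synthetic forcing
      q_obs = toy_model((8.0, 0.6), rain) + rng.normal(0, 0.05, 200)

      def nse(sim, obs):                                    # Nash-Sutcliffe efficiency as likelihood measure
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      n_samples, threshold = 5000, 0.5                      # behavioural threshold (assumed)
      behavioural = []
      for _ in range(n_samples):
          p = (rng.uniform(1, 20), rng.uniform(0.1, 1.0))   # uniform priors on k and c
          sim = toy_model(p, rain)
          if nse(sim, q_obs) > threshold:                   # reject non-behavioural parameter sets
              behavioural.append(sim)

      sims = np.array(behavioural)
      # Full GLUE uses likelihood-weighted quantiles; plain percentiles keep the sketch short.
      lower, upper = np.percentile(sims, [5, 95], axis=0)
      print(f"{len(behavioural)} behavioural sets, mean 5-95% band width = {(upper - lower).mean():.3f}")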

  10. Analysis of parameter uncertainty in hydrological modeling using GLUE method: a case study of SWAT model applied to Three Gorges Reservoir Region, China

    Directory of Open Access Journals (Sweden)

    Z. Y. Shen

    2011-08-01

    Full Text Available The calibration of hydrologic models is a worldwide difficulty due to the uncertainty involved in the large number of parameters. The difficulty even increases in regions with high seasonal variation of precipitation, where the results exhibit high heteroscedasticity and autocorrelation. In this study, the Generalized Likelihood Uncertainty Estimation (GLUE) method was combined with the Soil and Water Assessment Tool (SWAT) to quantify the parameter uncertainty of the stream flow and sediment simulation in the Daning River Watershed of the Three Gorges Reservoir Region (TGRA), China. Only a few parameters were found to affect the final simulation output significantly. The results showed that sediment simulation presented greater uncertainty than stream flow, and uncertainty was even greater under high precipitation conditions than during the dry season. The main uncertainty sources of stream flow came from the catchment process, while the channel process greatly impacts the sediment simulation. It should be noted that optimal ranges could be obtained for identifiable parameters such as CANMX, ALPHA_BNK and SOL_K using the calibration method. However, equifinality was also observed in hydrologic modeling in TGRA. This paper demonstrated that care must be taken when calibrating the SWAT model with non-identifiable parameters, as these may lead to equifinality of the parameter values. It is anticipated that this study will provide useful information for hydrology modeling related to policy development in the Three Gorges Reservoir Region (TGRA) and other similar areas.

  11. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  12. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Martínez Santos, D.; Olive, K.A.; Richards, A.; de Vries, K.J.; Weiglein, G.

    2016-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  13. REDUCING THE LIKELIHOOD OF LONG TENNIS MATCHES

    Directory of Open Access Journals (Sweden)

    Tristan Barnett

    2006-12-01

    Full Text Available Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match

  14. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Brown, Alan; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: (1) The cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match. (2) A final tiebreaker set reduces the length of matches, as currently being used in the US Open. (3) A new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
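
    As a rough illustration of the kind of calculation behind these results (a sketch, not the authors' generating-function derivation), the win probability of a no-deuce "race" game follows from a simple recursion; reading the 50-40 game as a race in which the server needs 4 points before the receiver gets 3 is an assumption made here for illustration.

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def p_server_wins(a, b, p=0.62):
          """P(server wins) when the server needs a more points and the receiver needs b,
          with a constant probability p that the server wins any given point (assumed)."""
          if a == 0:
              return 1.0
          if b == 0:
              return 0.0
          return p * p_server_wins(a - 1, b, p) + (1 - p) * p_server_wins(a, b - 1, p)

      # Example: server must reach 4 points before the receiver reaches 3 (no deuce),
      # one way of shortening games relative to standard advantage scoring.
      print(round(p_server_wins(4, 3), 3))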

  15. Likelihood Analysis of Supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY; Costa, J. C. [Imperial Coll., London; Sakurai, K. [Warsaw U.; Borsato, M. [Santiago de Compostela U.; Buchmueller, O. [Imperial Coll., London; Cavanaugh, R. [Illinois U., Chicago; Chobanova, V. [Santiago de Compostela U.; Citron, M. [Imperial Coll., London; De Roeck, A. [Antwerp U.; Dolan, M. J. [Melbourne U.; Ellis, J. R. [King' s Coll. London; Flächer, H. [Bristol U.; Heinemeyer, S. [Madrid, IFT; Isidori, G. [Zurich U.; Lucio, M. [Santiago de Compostela U.; Martínez Santos, D. [Santiago de Compostela U.; Olive, K. A. [Minnesota U., Theor. Phys. Inst.; Richards, A. [Imperial Coll., London; de Vries, K. J. [Imperial Coll., London; Weiglein, G. [DESY

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\\tilde u_R}/{\\tilde c_R} - \\tilde{\\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ${\\tilde \

  16. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  17. On the Loss of Information in Conditional Maximum Likelihood Estimation of Item Parameters.

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2000-01-01

    Shows that the concept of F-information, a generalization of Fisher information, is a useful tool for evaluating the loss of information in conditional maximum likelihood (CML) estimation. With the F-information concept it is possible to investigate the conditions under which there is no loss of information in CML estimation and to quantify a loss…

  18. Automatic Optimism: The Affective Basis of Judgments about the Likelihood of Future Events

    Science.gov (United States)

    Lench, Heather C.

    2009-01-01

    People generally judge that the future will be consistent with their desires, but the reason for this desirability bias is unclear. This investigation examined whether affective reactions associated with future events are the mechanism through which desires influence likelihood judgments. In 4 studies, affective reactions were manipulated for…

  19. Relations between the likelihood ratios for 2D continuous and discrete time stochastic processes

    NARCIS (Netherlands)

    Luesink, Rob

    1991-01-01

    The author considers the likelihood ratio for 2D processes. In order to detect this ratio, it is necessary to compute the determinant of the covariance operator of the signal-plus-noise observation process. In the continuous case, this is in general a difficult problem. For cyclic processes, using F

  20. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  1. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  2. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solutions for the baseline hazard, the baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  3. A likelihood ratio test for species membership based on DNA sequence data

    DEFF Research Database (Denmark)

    Matz, Mikhail V.; Nielsen, Rasmus

    2005-01-01

    DNA barcoding as an approach for species identification is rapidly increasing in popularity. However, it remains unclear which statistical procedures should accompany the technique to provide a measure of uncertainty. Here we describe a likelihood ratio test which can be used to test if a sampled...... sequence is a member of an a priori specified species. We investigate the performance of the test using coalescence simulations, as well as using the real data from butterflies and frogs representing two kinds of challenge for DNA barcoding: extremely low and extremely high levels of sequence variability....

  4. Likelihood Analysis of the Local Group Acceleration

    CERN Document Server

    Schmoldt, I M; Teodoro, L; Efstathiou, G P; Frenk, C S; Keeble, O; Maddox, S J; Oliver, S; Rowan-Robinson, M; Saunders, W J; Sutherland, W; Tadros, H; White, S D M

    1999-01-01

    We compute the acceleration on the Local Group using 11206 IRAS galaxies from the recently completed all-sky PSCz redshift survey. Measuring the acceleration vector in redshift space generates systematic uncertainties due to the redshift space distortions in the density field. We therefore assign galaxies to their real space positions by adopting a non-parametric model for the velocity field that solely relies on the linear gravitational instability and linear biasing hypotheses. Remaining systematic contributions to the measured acceleration vector are corrected for by using PSCz mock catalogues from N-body experiments. The resulting acceleration vector points approx. 15 degrees away from the CMB dipole apex, with a remarkable alignment between small and large scale contributions. A considerable fraction of the measured acceleration is generated within 40 h-1 Mpc with a non-negligible contribution from scales between 90 and 140 h-1 Mpc after which the acceleration amplitude seems to have converged. The local...

  5. Evaluation of parameter uncertainties obtained from in-situ tracer experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sawada, Atsushi; Yoshino, Naoto [Japan Nuclear Cycle Development Inst., Tokai Works, Tokai, Ibaraki (Japan); Ijiri, Yuji; Hata, Akihito [Taisei Corp., Tokyo (Japan); Hosono, Kenichi [Geoscience Research Laboratory, Yamato, Kanagawa (Japan)

    2003-03-01

    Radionuclide transport parameter uncertainty is an important consideration in the safety assessment of high-level radioactive waste disposal. This paper describes the development of a method for the quantitative estimation of transport parameter uncertainties from in-situ tracer experiments. The method utilizes a probabilistic inversion based on the maximum likelihood method. Transport parameters and their uncertainties are derived from a series of conservative and reactive tracer tests conducted in a single fracture at the Aespoe Hard Rock Laboratory in Sweden. These transport parameters and uncertainties are useful for evaluating the influence of parameter uncertainty on safety assessment. (author)

  6. Analysis of keff Uncertainty on Nuclear Data of CEFR

    Institute of Scientific and Technical Information of China (English)

    YANG; Jun; YU; Hong; XU; Li; HU; Yun

    2013-01-01

    The uncertainty due to nuclear data is an important part of the total uncertainty of the calculated keff value. In order to analyze the nuclear data uncertainty of keff for CEFR, a general expression for the sensitivity of an integral quantity was derived based on the generalized perturbation theory (GPT). Using this general expression, the specific calculation formula for the sensitivity of keff was given, and the multi-group

  7. Robertson-Schrödinger formulation of Ozawa's uncertainty principle

    Science.gov (United States)

    Bastos, Catarina; Bernardini, Alex E.; Bertolami, Orfeu; Costa Dias, Nuno; Nuno Prata, João

    2015-07-01

    A more general measurement disturbance uncertainty principle is presented in a Robertson-Schrödinger formulation. It is shown that it is stronger and has nicer properties than Ozawa's uncertainty relations. In particular, it is invariant under symplectic transformations. It is also shown that there are states of the probe (measuring device) that saturate the matrix formulation of the measurement disturbance uncertainty principle.

  8. Robertson-Schr\\"odinger formulation of Ozawa's Uncertainty Principle

    CERN Document Server

    Bastos, Catarina; Bertolami, O; Dias, N C; Prata, J N

    2014-01-01

    A more general measurement disturbance uncertainty principle is presented in a Robertson-Schr\"odinger formulation. It is shown that it is stronger and has nicer properties than Ozawa's uncertainty relations. In particular, it is invariant under symplectic transformations. It is also shown that there are states of the probe (measuring device) that saturate the matrix formulation of the measurement disturbance uncertainty principle.

  9. Heisenberg's uncertainty principle

    OpenAIRE

    Busch, Paul; Heinonen, Teiko; Lahti, Pekka

    2007-01-01

    Heisenberg's uncertainty principle is usually taken to express a limitation of operational possibilities imposed by quantum mechanics. Here we demonstrate that the full content of this principle also includes its positive role as a condition ensuring that mutually exclusive experimental options can be reconciled if an appropriate trade-off is accepted. The uncertainty principle is shown to appear in three manifestations, in the form of uncertainty relations: for the widths of the position and...

  10. Uncertainty relations, zero point energy and the linear canonical group

    Science.gov (United States)

    Sudarshan, E. C. G.

    1993-01-01

    The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
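
    For reference, the single-mode Schrödinger-Robertson relation referred to above is usually written as follows (standard textbook form, not a result specific to this record):

      \sigma_A^2 \, \sigma_B^2 \;\ge\; \Big( \tfrac{1}{2}\langle\{\hat A,\hat B\}\rangle - \langle\hat A\rangle\langle\hat B\rangle \Big)^2 + \Big( \tfrac{1}{2i}\langle[\hat A,\hat B]\rangle \Big)^2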

  11. Uncertainty in artificial intelligence

    CERN Document Server

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy.Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  12. [Ethics, empiricism and uncertainty].

    Science.gov (United States)

    Porz, R; Zimmermann, H; Exadaktylos, A K

    2011-01-01

    Accidents can lead to difficult boundary situations. Such situations often take place in the emergency units. The medical team thus often and inevitably faces professional uncertainty in their decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in treatment or withdrawal of treatment. It does not need to be covered in evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  14. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  15. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...... d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d,b in a large compact subset of {1/2...

  16. Estimating nonlinear dynamic equilibrium economies: a likelihood approach

    OpenAIRE

    2004-01-01

    This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...

  17. Likelihood ratios: Clinical application in day-to-day practice

    Directory of Open Access Journals (Sweden)

    Parikh Rajul

    2009-01-01

    Full Text Available In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
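
    A minimal sketch of the odds calculation the article walks through; the pretest probability of 0.30 and the likelihood ratios of 8 (positive result) and 0.2 (negative result) are made-up illustrative numbers, not values from the article.

      def posttest_probability(pretest_prob, likelihood_ratio):
          """Convert pretest probability -> odds, apply the LR, convert back to probability."""
          pretest_odds = pretest_prob / (1.0 - pretest_prob)
          posttest_odds = pretest_odds * likelihood_ratio
          return posttest_odds / (1.0 + posttest_odds)

      pretest = 0.30                       # assumed pretest probability of disease
      for lr in (8.0, 0.2):                # assumed LR+ and LR- of the clinical test
          print(f"LR = {lr}: post-test probability = {posttest_probability(pretest, lr):.2f}")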

  18. Empirical likelihood estimation of discretely sampled processes of OU type

    Institute of Scientific and Technical Information of China (English)

    SUN ShuGuang; ZHANG XinSheng

    2009-01-01

    This paper presents an empirical likelihood estimation procedure for parameters of the discretely sampled process of Ornstein-Uhlenbeck type. The proposed procedure is based on the conditional characteristic function, and the maximum empirical likelihood estimator is proved to be consistent and asymptotically normal. Moreover, this estimator is shown to be asymptotically efficient under some conditions. In some cases the intensity parameter can be exactly recovered, and we study the maximum empirical likelihood estimator with the plug-in estimated intensity parameter. Testing procedures based on the empirical likelihood ratio statistic are developed for parameters and for estimating equations, respectively. Finally, Monte Carlo simulations are conducted to demonstrate the performance of the proposed estimators.

  19. Empirical Likelihood based Confidence Regions for first order parameters of a heavy tailed distribution

    CERN Document Server

    Worms, Julien

    2010-01-01

    Let $X_1, \\ldots, X_n$ be some i.i.d. observations from a heavy tailed distribution $F$, i.e. such that the common distribution of the excesses over a high threshold $u_n$ can be approximated by a Generalized Pareto Distribution $G_{\\gamma,\\sigma_n}$ with $\\gamma >0$. This work is devoted to the problem of finding confidence regions for the couple $(\\gamma,\\sigma_n)$ : combining the empirical likelihood methodology with estimation equations (close but not identical to the likelihood equations) introduced by J. Zhang (Australian and New Zealand J. Stat n.49(1), 2007), asymptotically valid confidence regions for $(\\gamma,\\sigma_n)$ are obtained and proved to perform better than Wald-type confidence regions (especially those derived from the asymptotic normality of the maximum likelihood estimators). By profiling out the scale parameter, confidence intervals for the tail index are also derived.

  20. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time dependent probability density functions (colored noise). Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies.
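
    The Allan-variance-style statistic mentioned above can be computed from a position time series roughly as sketched below; the synthetic daily series (linear rate plus white and random-walk noise) and the standard overlapping estimator are generic illustrations, not the exact method of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic daily position series [mm]: linear rate + white noise + random walk
      n, rate = 3000, 0.01                    # ~8 years, 0.01 mm/day
      t = np.arange(n)
      x = rate * t + rng.normal(0, 1.0, n) + np.cumsum(rng.normal(0, 0.05, n))

      def allan_variance(x, m, tau0=1.0):
          """Overlapping Allan variance of 'phase-like' data x at averaging time m*tau0."""
          tau = m * tau0
          d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
          return np.sum(d ** 2) / (2.0 * tau ** 2 * d.size)

      for m in (1, 10, 100, 500):
          print(f"tau = {m:4d} days: Allan deviation = {np.sqrt(allan_variance(x, m)):.4f}")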

  1. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is kappa(gamma d + g(t, tau)d{sup 2}), where t is the time and d is dose. The coefficient of the d{sup 2} term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
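
    A minimal sketch of a Poisson maximum likelihood fit for a quadratic dose-response curve of the form lambda(d) = c0 + c1*d + c2*d**2 per cell; the doses, cell counts and dicentric counts below are fabricated for illustration and are not data from the report.

      import numpy as np
      from scipy.optimize import minimize

      # Illustrative acute-exposure data: dose [Gy], cells scored, dicentrics observed
      dose   = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
      cells  = np.array([5000, 3000, 2000, 1000, 800, 500])
      dicent = np.array([5, 90, 180, 300, 505, 540])

      def neg_log_like(theta):
          c0, c1, c2 = theta
          lam = cells * (c0 + c1 * dose + c2 * dose ** 2)   # expected dicentrics per dose group
          if np.any(lam <= 0):
              return np.inf
          # Poisson log-likelihood up to an additive constant
          return -np.sum(dicent * np.log(lam) - lam)

      fit = minimize(neg_log_like, x0=[1e-3, 2e-2, 5e-2], method="Nelder-Mead")
      print("MLE (c0, c1, c2):", fit.x)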

  2. Feedback versus uncertainty

    NARCIS (Netherlands)

    Van Nooyen, R.R.P.; Hrachowitz, M.; Kolechkina, A.G.

    2014-01-01

    Even without uncertainty about the model structure or parameters, the output of a hydrological model run still contains several sources of uncertainty. These are: measurement errors affecting the input, the transition from continuous time and space to discrete time and space, which causes loss of in

  3. Chance and Uncertainty

    NARCIS (Netherlands)

    Capel, H.W.; Cramer, J.S.; Estevez-Uscanga, O.

    1995-01-01

    'Uncertainty and chance' is a subject with a broad span, in that there is no academic discipline or walk of life that is not beset by uncertainty and chance. In this book a range of approaches is represented by authors from varied disciplines: natural sciences, mathematics, social sciences and medic

  4. Guide for Uncertainty Communication

    NARCIS (Netherlands)

    Wardekker, J.A.; Kloprogge, P.; Petersen, A.C.; Janssen, P.H.M.; van der Sluijs, J.P.

    2013-01-01

    Dealing with uncertainty, in terms of analysis and communication, is an important and distinct topic for PBL Netherlands Environmental Assessment Agency. Without paying adequate attention to the role and implications of uncertainty, research and assessment results may be of limited value and could

  5. Computing with Epistemic Uncertainty

    Science.gov (United States)

    2015-01-01

    modified the input uncertainties in any way. And by avoiding the need for simulation, various assumptions and selection of specific sampling...strategies that may affect results are also avoided. According to the Principle of Maximum Uncertainty, epistemic intervals represent the highest input...

  6. Uncertainty theory. 2. ed.

    Energy Technology Data Exchange (ETDEWEB)

    Liu Baoding [Tsinghua Univ., Beijing (China). Uncertainty Theory Lab.

    2007-07-01

    Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, and countable subadditivity axioms. The goal of uncertainty theory is to study the behavior of uncertain phenomena such as fuzziness and randomness. The main topics include probability theory, credibility theory, and chance theory. For this new edition the entire text has been totally rewritten. More importantly, the chapters on chance theory and uncertainty theory are completely new. This book provides a self-contained, comprehensive and up-to-date presentation of uncertainty theory. The purpose is to equip the readers with an axiomatic approach to deal with uncertainty. Mathematicians, researchers, engineers, designers, and students in the field of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, and management science will find this work a stimulating and useful reference. (orig.)

  7. Economic uncertainty and econophysics

    Science.gov (United States)

    Schinckus, Christophe

    2009-10-01

    The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework, in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.

  8. Physical Uncertainty Bounds (PUB)

    Energy Technology Data Exchange (ETDEWEB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  9. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  10. The Black Hole Uncertainty Principle Correspondence

    CERN Document Server

    Carr, B J

    2014-01-01

    The Black Hole Uncertainty Principle correspondence proposes a connection between the Uncertainty Principle on microscopic scales and black holes on macroscopic scales. This is manifested in a unified expression for the Compton wavelength and Schwarzschild radius. It is a natural consequence of the Generalized Uncertainty Principle, which suggests corrections to the Uncertainty Principle as the energy increases towards the Planck value. It also entails corrections to the event horizon size as the black hole mass falls to the Planck value, leading to the concept of a Generalized Event Horizon. One implication of this is that there could be sub-Planckian black holes with a size of order their Compton wavelength. Loop quantum gravity suggests the existence of black holes with precisely this feature. The correspondence leads to a heuristic derivation of the black hole temperature and suggests how the Hawking formula is modified in the sub-Planckian regime.

  11. PIV uncertainty propagation

    Science.gov (United States)

    Sciacchitano, Andrea; Wieneke, Bernhard

    2016-08-01

    This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
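
    As a rough sketch of the statistical-quantity part of the propagation described above (generic formulas, not the paper's derivation), the uncertainty of a mean estimated from N correlated velocity samples can be written with an effective number of independent samples; the AR(1) surrogate signal below only stands in for a real PIV time series.

      import numpy as np

      rng = np.random.default_rng(3)

      # AR(1) surrogate for a velocity time series at one point (correlated samples)
      n, phi = 5000, 0.8
      u = np.empty(n)
      u[0] = rng.normal()
      for i in range(1, n):
          u[i] = phi * u[i - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))

      sigma = u.std(ddof=1)

      # Integral time scale from the sample autocorrelation, truncated at the first zero crossing
      uc = u - u.mean()
      acf = np.correlate(uc, uc, mode="full")[n - 1:] / (uc @ uc)
      first_zero = np.argmax(acf <= 0)
      T_int = 0.5 + acf[1:first_zero].sum()          # in units of the sampling interval

      n_eff = n / (2.0 * T_int)                      # effective number of independent samples
      print(f"sigma = {sigma:.3f},  N_eff = {n_eff:.0f}")
      print(f"uncertainty of the mean ~ sigma/sqrt(N_eff) = {sigma / np.sqrt(n_eff):.4f}")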

  12. Assessment of model uncertainty during the river export modelling of pesticides and transformation products

    Science.gov (United States)

    Gassmann, Matthias; Olsson, Oliver; Kümmerer, Klaus

    2013-04-01

    The modelling of organic pollutants in the environment is burdened by a load of uncertainties. Not only parameter values are uncertain but often also the mass and timing of pesticide application. By introducing transformation products (TPs) into modelling, further uncertainties are likely, arising from the dependence of these substances on their parent compounds and from the introduction of new model parameters. The purpose of this study was the investigation of the behaviour of a parsimonious catchment scale model for the assessment of river concentrations of the insecticide Chlorpyrifos (CP) and two of its TPs, Chlorpyrifos Oxon (CPO) and 3,5,6-trichloro-2-pyridinol (TCP), under the influence of uncertain input parameter values. In particular, parameter uncertainty and pesticide application uncertainty were investigated by Global Sensitivity Analysis (GSA) and the Generalized Likelihood Uncertainty Estimation (GLUE) method, based on Monte-Carlo sampling. GSA revealed that half-lives and sorption parameters as well as half-lives and transformation parameters were correlated to each other. This means that the concepts of modelling sorption and degradation/transformation were correlated. Thus, it may be difficult in modelling studies to optimize parameter values for these modules. Furthermore, we could show that erroneous pesticide application mass and timing were compensated during Monte-Carlo sampling by changing the half-life of CP. However, the introduction of TCP into the calculation of the objective function was able to enhance identifiability of pesticide application mass. The GLUE analysis showed that CP and TCP were modelled successfully, but CPO modelling failed with high uncertainty and insensitive parameters. We assumed a structural error of the model which was especially important for CPO assessment. This shows that there is the possibility that a chemical and some of its TPs can be modelled successfully by a specific model structure, but for other TPs, the model

  13. Stochastic and epistemic uncertainty propagation in LCA

    DEFF Research Database (Denmark)

    Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide

    2013-01-01

    or expert judgement (epistemic uncertainty). The possibility theory has been developed over the last decades to address this problem. The objective of this study is to present a methodology that combines probability and possibility theories to represent stochastic and epistemic uncertainties in a consistent...... of epistemic uncertainty representation using fuzzy intervals. The propagation methods used are the Monte Carlo analysis for probability distribution and an optimisation on alpha-cuts for fuzzy intervals. The proposed method (noted as Independent Random Set, IRS) generalizes the process of random sampling...

  14. Uncertainties in Safety Analysis. A literature review

    Energy Technology Data Exchange (ETDEWEB)

    Ekberg, C. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Nuclear Chemistry

    1995-05-01

    The purpose of the presented work has been to give a short summary of the origins of many uncertainties arising in the designing and performance assessment of a repository for spent nuclear fuel. Some different methods to treat these uncertainties are also included. The methods and conclusions are in many cases general in the sense that they are applicable to many other disciplines where simulations are used. As a conclusion it may be noted that uncertainties of different origin have been discussed and debated, but one large group, e.g. computer simulations, for which methods for a more explicit investigation exist, has not been investigated in a satisfying way. 50 refs.

  15. Uncertainty relation for mutual information

    Science.gov (United States)

    Schneeloch, James; Broadbent, Curtis J.; Howell, John C.

    2014-12-01

    We postulate the existence of a universal uncertainty relation between the quantum and classical mutual informations between pairs of quantum systems. Specifically, we propose that the sum of the classical mutual information, determined by two mutually unbiased pairs of observables, never exceeds the quantum mutual information. We call this the complementary-quantum correlation (CQC) relation and prove its validity for pure states, for states with one maximally mixed subsystem, and for all states when one measurement is minimally disturbing. We provide results of a Monte Carlo simulation suggesting that the CQC relation is generally valid. Importantly, we also show that the CQC relation represents an improvement to an entropic uncertainty principle in the presence of a quantum memory, and that it can be used to verify an achievable secret key rate in the quantum one-time pad cryptographic protocol.
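
    In the notation suggested by the abstract, the proposed complementary-quantum correlation relation can be summarized schematically as (a paraphrase of the stated inequality, with Q and R denoting the two mutually unbiased measurement pairs, not the paper's exact formula):

      I(Q_A : Q_B) + I(R_A : R_B) \;\le\; I(A : B),

    where the left-hand side is the classical mutual information of the measurement outcomes and the right-hand side is the quantum mutual information of the joint state.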

  16. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach substantially improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.

  17. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  18. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider empirical likelihood inference for jump-diffusion models. We construct confidence intervals, based on the empirical likelihood, for the infinitesimal moments of these models. These intervals perform better than confidence intervals based on the asymptotic normality of point estimates.

  19. A likelihood method to cross-calibrate air-shower detectors

    CERN Document Server

    Dembinski, H P; Mariş, I C; Roth, M; Veberič, D

    2016-01-01

    We present a detailed statistical treatment of the energy calibration of hybrid air-shower detectors, which combine a surface detector array and a fluorescence detector, to obtain an unbiased estimate of the calibration curve. The special features of calibration data from air showers prevent unbiased results if a standard least-squares fit is applied to the problem. We develop a general maximum-likelihood approach, based on the detailed statistical model, to solve the problem. Our approach was developed for the Pierre Auger Observatory, but the applied principles are general and can be transferred to other air-shower experiments, even to the cross-calibration of other observables. Since our general likelihood function is expensive to compute, we derive two approximations with significantly smaller computational cost. In recent years, both have been used to calibrate data of the Pierre Auger Observatory. We demonstrate that these approximations introduce negligible bias when they are applied to simulated t...
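
    As an illustration only (not the Auger calibration model), the sketch below fits a calibration curve E = A·S^B by maximizing a likelihood with an explicit, energy-proportional resolution term, and contrasts it with a plain least-squares fit in log-log space; the functional form, noise model and numbers are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        S = rng.uniform(10, 100, size=300)                     # surface-detector size estimator
        E_true = 0.5 * S ** 1.05                               # assumed calibration curve
        E_fd = E_true * (1 + 0.15 * rng.normal(size=S.size))   # fluorescence energy with 15% resolution

        def neg_log_lik(theta):
            log_a, b, log_r = theta
            mu = np.exp(log_a) * S ** b                        # model prediction
            sigma = np.exp(log_r) * mu                         # resolution proportional to energy
            return np.sum(0.5 * ((E_fd - mu) / sigma) ** 2 + np.log(sigma))

        fit = minimize(neg_log_lik, x0=[0.0, 1.0, np.log(0.1)], method="Nelder-Mead")
        a_ml, b_ml, r_ml = np.exp(fit.x[0]), fit.x[1], np.exp(fit.x[2])
        b_ls = np.polyfit(np.log(S), np.log(E_fd), 1)[0]       # naive least-squares slope for comparison
        print(f"ML fit: A={a_ml:.2f}, B={b_ml:.2f}, resolution={r_ml:.2f}; LS slope B={b_ls:.2f}")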

  20. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
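
    The convolution step can be pictured with a one-dimensional toy: if the theoretical innovation and the model distance have known densities, the density of their sum, which is what the observed innovation follows, is their convolution. The grids, distributions and observed value below are illustrative.

        import numpy as np

        dx = 0.01
        x = np.arange(-10, 10, dx)
        innov_pdf = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)                         # matched-model innovation, N(0, 1)
        dist_pdf = np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))  # model distance, N(2, 0.5^2)

        sum_pdf = np.convolve(innov_pdf, dist_pdf) * dx      # density of the sum of the two terms
        z = 2 * x[0] + dx * np.arange(sum_pdf.size)          # support on which the convolution lives

        observed = 2.3                                       # an observed (mismatched-model) innovation
        likelihood = np.interp(observed, z, sum_pdf)
        exact = np.exp(-0.5 * (observed - 2.0) ** 2 / 1.25) / np.sqrt(2 * np.pi * 1.25)  # analytic N(2, 1.25)
        print(f"convolved likelihood {likelihood:.4f} vs analytic {exact:.4f}")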

  1. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    Science.gov (United States)

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariable yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
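
    The linear-regression equivalence mentioned above is easy to see in a small simulation: with a purely additive batch effect in the measured predictor, within-batch centring (equivalent to including batch as a fixed effect) recovers the slope, whereas the naive fit is attenuated. The data-generating values below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        n_batches, per_batch, beta = 20, 15, 2.0
        batch = np.repeat(np.arange(n_batches), per_batch)
        x_true = rng.normal(size=batch.size)
        w = x_true + rng.normal(scale=2.0, size=n_batches)[batch]   # predictor measured with additive batch error
        y = beta * x_true + rng.normal(scale=0.5, size=batch.size)

        def ols_slope(x, y):
            xc, yc = x - x.mean(), y - y.mean()
            return np.sum(xc * yc) / np.sum(xc ** 2)

        naive = ols_slope(w, y)                                     # ignores the batch structure
        # within-batch centring cancels the additive batch term, mimicking batch fixed effects
        mean_w = np.array([w[batch == b].mean() for b in range(n_batches)])
        mean_y = np.array([y[batch == b].mean() for b in range(n_batches)])
        fixed = ols_slope(w - mean_w[batch], y - mean_y[batch])
        print(f"true slope {beta:.1f}, naive {naive:.2f}, batch fixed effects {fixed:.2f}")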

  2. Possibilistic uncertainty analysis of a conceptual model of snowmelt runoff

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2010-03-01

    Full Text Available This study presents the analysis of predictive uncertainty of a conceptual type snowmelt runoff model. The method applied uses possibilistic rather than probabilistic calculus for the evaluation of predictive uncertainty. Possibility theory is an information theory meant to model uncertainties caused by imprecise or incomplete knowledge about a real system rather than by randomness. A snow dominated catchment in the Chilean Andes is used as case study. Predictive uncertainty arising from parameter uncertainties of the watershed model is assessed. Model performance is evaluated according to several criteria, in order to define the possibility distribution of the model representations. The likelihood of the simulated glacier mass balance and snow cover are used for further assessing model credibility. Possibility distributions of the discharge estimates and prediction uncertainty bounds are subsequently derived. The results of the study indicate that the use of additional information allows a reduction of predictive uncertainty. In particular, the assessment of the simulated glacier mass balance and snow cover helps to reduce the width of the uncertainty bounds without a significant increment in the number of unbounded observations.

  3. Optimal Universal Uncertainty Relations

    Science.gov (United States)

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method, called the joint probability distribution diagram, to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state-independent uncertainty relations satisfied by any nonnegative Schur-concave function. On the other hand, a remarkable recent result on entropic uncertainty relations is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  4. Treatment of Uncertainties in Probabilistic Tsunami Hazard

    Science.gov (United States)

    Thio, H. K.

    2012-12-01

    Over the last few years, we have developed a framework for developing probabilistic tsunami inundation maps, which includes comprehensive quantification of earthquake recurrence as well as uncertainties, and applied it to the development of a tsunami hazard map of California. The various uncertainties in tsunami source and propagation models are an integral part of a comprehensive probabilistic tsunami hazard analysis (PTHA), and often drive the hazard at low probability levels (i.e. long return periods). There is no unique manner in which uncertainties are included in the analysis although in general, we distinguish between "natural" or aleatory variability, such as slip distribution and event magnitude, and uncertainties due to an incomplete understanding of the behavior of the earth, called epistemic uncertainties, such as scaling relations and rupture segmentation. Aleatory uncertainties are typically included through integration over distribution functions based on regression analyses, whereas epistemic uncertainties are included using logic trees. We will discuss how the different uncertainties were included in our recent probabilistic tsunami inundation maps for California, and their relative importance on the final results. Including these uncertainties in offshore exceedance waveheights is straightforward, but the problem becomes more complicated once the non-linearity of near-shore propagation and inundation are encountered. By using the probabilistic off-shore waveheights as input level for the inundation models, the uncertainties up to that point can be included in the final maps. PTHA provides a consistent analysis of tsunami hazard and will become an important tool in diverse areas such as coastal engineering and land use planning. The inclusive nature of the analysis, where few assumptions are made a-priori as to which sources are significant, means that a single analysis can provide a comprehensive view of the hazard and its dominant sources
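
    The aleatory/epistemic split described above can be caricatured in a few lines: aleatory variability is integrated over a magnitude-frequency distribution, while epistemic alternatives (here two made-up wave-height scaling branches) are combined with logic-tree weights. All rates, scalings and weights below are hypothetical.

        import numpy as np

        magnitudes = np.arange(7.0, 9.51, 0.1)
        cum_rates = 10 ** (4.0 - 1.0 * magnitudes)               # toy Gutenberg-Richter rates of M >= m
        inc_rates = cum_rates - np.append(cum_rates[1:], 0.0)    # rate per magnitude bin (aleatory model)

        # two epistemic branches for an offshore wave-height scaling law, with logic-tree weights
        branches = [(0.6, lambda m: 0.8 * 10 ** (0.5 * (m - 7.0))),
                    (0.4, lambda m: 1.2 * 10 ** (0.4 * (m - 7.0)))]

        thresholds = np.array([0.5, 1.0, 2.0, 4.0])              # offshore wave heights in metres
        hazard = np.zeros_like(thresholds)
        for weight, scaling in branches:
            heights = scaling(magnitudes)
            # aleatory integration: add up the rates of all magnitude bins exceeding each threshold
            branch_rates = np.array([inc_rates[heights >= h].sum() for h in thresholds])
            hazard += weight * branch_rates                      # epistemic combination via the logic tree
        for h, r in zip(thresholds, hazard):
            print(f"wave height >= {h:.1f} m: mean annual exceedance rate {r:.5f}")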

  5. Predicting streamflow response to fire-induced landcover change: implications of parameter uncertainty in the MIKE SHE model.

    Science.gov (United States)

    McMichael, Christine E; Hope, Allen S

    2007-08-01

    Fire is a primary agent of landcover transformation in California semi-arid shrubland watersheds; however, few studies have examined the impacts of fire and post-fire succession on streamflow dynamics in these basins. While it may seem intuitive that larger fires will have a greater impact on streamflow response than smaller fires in these watersheds, the nature of these relationships has not been determined. The effects of fire size on seasonal and annual streamflow responses were investigated for a medium-sized basin in central California using a modified version of the MIKE SHE model, which had been previously calibrated and tested for this watershed using the Generalized Likelihood Uncertainty Estimation methodology. Model simulations were made for two contrasting periods, wet and dry, in order to assess whether fire size effects varied with weather regime. Results indicated that seasonal and annual streamflow response increased nearly linearly with fire size in a given year under both regimes. Annual flow response was generally higher in wetter years for both weather regimes; however, a clear trend was confounded by the effect of stand age. These results expand our understanding of the effects of fire size on hydrologic response in chaparral watersheds, but it is important to note that the majority of model predictions were largely indistinguishable from the predictive uncertainty associated with the calibrated model - a key finding that highlights the importance of analyzing hydrologic predictions for altered landcover conditions in the context of model uncertainty. Future work is needed to examine how alternative decisions (e.g., different likelihood measures) may influence GLUE-based MIKE SHE streamflow predictions following different size fires, and how the effect of fire size on streamflow varies with other factors such as fire location.
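
    Since GLUE appears throughout these records, a compact sketch may help: parameter sets are sampled, each is scored with a likelihood measure (here Nash-Sutcliffe efficiency on a toy linear-reservoir model standing in for MIKE SHE), sets above a behavioural threshold are retained, and prediction bounds are taken across the behavioural simulations. The model, threshold and data are all illustrative.

        import numpy as np

        rng = np.random.default_rng(4)
        rain = rng.gamma(shape=0.4, scale=10.0, size=200)            # synthetic rainfall forcing

        def reservoir(k, rain):
            q, storage = np.zeros_like(rain), 0.0
            for t, p in enumerate(rain):
                storage += p
                q[t] = k * storage                                   # toy linear-reservoir outflow
                storage -= q[t]
            return q

        obs = reservoir(0.3, rain) * (1 + 0.1 * rng.normal(size=rain.size))  # "observed" discharge

        k_samples = rng.uniform(0.05, 0.8, size=2000)                # prior samples of the recession parameter
        nse = np.array([1 - np.sum((reservoir(k, rain) - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
                        for k in k_samples])
        behavioural = nse > 0.7                                      # behavioural threshold on the likelihood measure
        preds = np.array([reservoir(k, rain) for k in k_samples[behavioural]])
        lower, upper = np.percentile(preds, [5, 95], axis=0)         # (GLUE proper weights these quantiles by likelihood)
        print(f"{behavioural.sum()} behavioural sets, mean 5-95% bound width {np.mean(upper - lower):.2f}")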

  6. Role of information theoretic uncertainty relations in quantum theory

    Energy Technology Data Exchange (ETDEWEB)

    Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz [FNSPE, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1 (Czech Republic); ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin (Germany); Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk [Department of Physics and Astronomy, University of Sussex, Falmer, Brighton, BN1 9QH (United Kingdom); Joo, Jaewoo, E-mail: j.joo@surrey.ac.uk [Advanced Technology Institute and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)

    2015-04-15

    Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
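
    As a small aside on the Shannon-entropy relation the abstract compares against, the Maassen-Uffink bound for a single qubit measured in the mutually unbiased Z and X bases reads H(Z) + H(X) >= 1 bit; the sketch below simply probes it with random pure states. It is an illustration, not the Rényi-entropy construction of the paper.

        import numpy as np

        def shannon(p):
            p = p[p > 1e-12]
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(5)
        worst = np.inf
        for _ in range(5000):
            psi = rng.normal(size=2) + 1j * rng.normal(size=2)
            psi /= np.linalg.norm(psi)
            p_z = np.abs(psi) ** 2                                    # Z-basis outcome probabilities
            p_x = np.abs(np.array([[1, 1], [1, -1]]) @ psi) ** 2 / 2  # X-basis outcome probabilities
            worst = min(worst, shannon(p_z) + shannon(p_x))
        print(f"minimum of H(Z)+H(X) over sampled states: {worst:.3f} bits (bound: 1 bit)")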

  7. ON THE LIKELIHOOD OF PLANET FORMATION IN CLOSE BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Jang-Condell, Hannah, E-mail: hjangcon@uwyo.edu [Department of Physics and Astronomy, University of Wyoming, 1000 East University, Department 3905, Laramie, WY 82071 (United States)

    2015-02-01

    To date, several exoplanets have been discovered orbiting stars with close binary companions (a ≲ 30 AU). The fact that planets can form in these dynamically challenging environments implies that planet formation must be a robust process. The initial protoplanetary disks in these systems from which planets must form should be tidally truncated to radii of a few AU, which indicates that the efficiency of planet formation must be high. Here, we examine the truncation of circumstellar protoplanetary disks in close binary systems, studying how the likelihood of planet formation is affected over a range of disk parameters. If the semimajor axis of the binary is too small or its eccentricity is too high, the disk will have too little mass for planet formation to occur. However, we find that the stars in the binary systems known to have planets should have once hosted circumstellar disks that were capable of supporting planet formation despite their truncation. We present a way to characterize the feasibility of planet formation based on binary orbital parameters such as stellar mass, companion mass, eccentricity, and semimajor axis. Using this measure, we can quantify the robustness of planet formation in close binaries and better understand the overall efficiency of planet formation in general.

  8. Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise.

    Science.gov (United States)

    Dessimoz, Christophe; Gil, Manuel

    2008-06-23

    The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.

  9. Uncertainty, rationality, and agency

    CERN Document Server

    Hoek, Wiebe van der

    2006-01-01

    Goes across 'classical' borderlines of disciplines. Unifies logic, game theory, and epistemics and studies them in an agent setting. Combines classical and novel approaches to uncertainty, rationality, and agency.

  10. Introduction to uncertainty quantification

    CERN Document Server

    Sullivan, T J

    2015-01-01

    Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...

  11. Uncertainty in chemistry.

    Science.gov (United States)

    Menger, Fredric M

    2010-09-01

    It might come as a disappointment to some chemists, but just as there are uncertainties in physics and mathematics, there are some chemistry questions we may never know the answer to either, suggests Fredric M. Menger.

  12. Mechanics and uncertainty

    CERN Document Server

    Lemaire, Maurice

    2014-01-01

    Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.

  13. Chemical model reduction under uncertainty

    KAUST Repository

    Malpica Galassi, Riccardo

    2017-03-06

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
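
    A toy version of the uncertainty-propagation step (with a made-up importance index standing in for the CSP analysis) may clarify how per-reaction inclusion probabilities arise: each rate coefficient gets a lognormal uncertainty factor, samples are drawn, and the inclusion probability is the fraction of samples in which the reaction's importance clears a threshold. Every quantity below is a placeholder.

        import numpy as np

        rng = np.random.default_rng(6)
        n_reactions, n_samples = 30, 2000
        nominal_importance = rng.uniform(0.0, 1.0, size=n_reactions)     # placeholder for a CSP importance index
        uncertainty_factor = rng.uniform(1.5, 5.0, size=n_reactions)     # uncertainty factor per rate coefficient

        # lognormal perturbation of each rate; sigma chosen so the factor spans about two standard deviations
        sigma = np.log(uncertainty_factor) / 2.0
        perturb = np.exp(sigma * rng.normal(size=(n_samples, n_reactions)))
        importance = nominal_importance * perturb                        # toy response of importance to the rate change

        threshold = 0.5
        p_include = (importance > threshold).mean(axis=0)                # probability of inclusion per reaction
        kept = np.sum(p_include > 0.9)
        print(f"{kept} of {n_reactions} reactions are included with probability > 0.9")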

  14. Quantifying uncertainty in LCA-modelling of waste management systems.

    Science.gov (United States)

    Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H

    2012-12-01

    Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
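
    A bare-bones version of Steps 2 and 3 of this sequence, Monte Carlo propagation followed by a contribution analysis, is sketched below on an invented waste-LCA impact expression; the parameters, distributions and impact model are placeholders, and squared correlation is used as a crude contribution measure.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 5000
        params = {
            "waste_amount_t": rng.normal(1000, 100, n),          # tonnes of waste treated per year
            "ch4_collection_eff": rng.uniform(0.4, 0.9, n),      # landfill gas collection efficiency
            "energy_recovery_kwh_t": rng.normal(500, 75, n),     # electricity recovered per tonne
            "grid_factor_kgco2_kwh": rng.uniform(0.2, 0.8, n),   # CO2-eq intensity of displaced power
        }
        # Step 2: propagate the samples through a hypothetical net-impact expression (t CO2-eq)
        impact = (params["waste_amount_t"] * 2.0 * (1 - params["ch4_collection_eff"])
                  - params["waste_amount_t"] * params["energy_recovery_kwh_t"]
                    * params["grid_factor_kgco2_kwh"] / 1000.0)
        print(f"net impact: mean {impact.mean():.0f} t CO2-eq, 2.5-97.5% interval "
              f"[{np.quantile(impact, 0.025):.0f}, {np.quantile(impact, 0.975):.0f}]")

        # Step 3: crude contribution analysis via squared correlation with the output
        contrib = {name: np.corrcoef(values, impact)[0, 1] ** 2 for name, values in params.items()}
        total = sum(contrib.values())
        for name, c in sorted(contrib.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {100 * c / total:.0f}% of explained variance")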

  15. Maximum Likelihood Estimation of the Identification Parameters and Its Correction

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and then a maximum likelihood estimation of the identification parameters is given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least-squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least-squares methods.

  16. Connectivity graphs of uncertainty regions

    CERN Document Server

    Chambers, Erin; Lenchner, Jonathan; Sember, Jeff; Srinivasan, Venkatesh; Stege, Ulrike; Stolpner, Svetlana; Weibel, Christophe; Whitesides, Sue

    2010-01-01

    We study a generalization of the well-known bottleneck spanning tree problem called "Best Case Connectivity with Uncertainty": Given a family of geometric regions, choose one point per region, such that the length of the longest edge in a spanning tree of a disc intersection graph is minimized. We show that this problem is NP-hard even for very simple scenarios such as line segments and squares. We also give exact and approximation algorithms for the case of line segments and unit discs, respectively.

  17. Uncertainty Relation and Inseparability Criterion

    Science.gov (United States)

    Goswami, Ashutosh K.; Panigrahi, Prasanta K.

    2016-11-01

    We investigate the Peres-Horodecki positive partial transpose criterion in the context of conserved quantities and derive a condition of inseparability for a composite bipartite system that depends only on the dimensions of its subsystems, which leads to a bi-linear entanglement witness for the two-qubit system. A separability inequality using the generalized Schrödinger-Robertson uncertainty relation with suitably chosen operators has been derived, which proves to be stronger than the bi-linear entanglement witness operator. In the case of mixed density matrices, it identically distinguishes the separable and non-separable Werner states.

  18. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Full Text Available Resolution Test Objects, Graphic Arts Research Center, Rochester Institute of Technology, Rochester, NY (1977). 10. ISO/IEC 17025:1999, "General requirements for the competence of testing and calibration laboratories." 11. I. Miller and J. E...

  19. Quantum Uncertainty and Fundamental Interactions

    Directory of Open Access Journals (Sweden)

    Tosto S.

    2013-04-01

    Full Text Available The paper proposes a simplified theoretical approach to infer some essential concepts on the fundamental interactions between charged particles and their relative strengths at comparable energies by exploiting the quantum uncertainty only. The worth of the present approach lies in the way the results are obtained rather than in the results themselves: concepts today acknowledged as fingerprints of the electroweak and strong interactions indeed appear rooted in the same theoretical frame, which also includes the basic principles of special and general relativity along with the gravitational force.

  20. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.

    Science.gov (United States)

    Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng

    2017-04-17

    added with Gaussian distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfying since the Gaussian noise and the Rayleigh speckle are greatly suppressed. The proposed method can improve the SNRs of the enhanced images to nearly 15 and 13 dB compared with images corrupted by speckle as well as images contaminated by speckle and noise under various SNR levels, respectively. The RSBF is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates its superiority in accuracy and robustness for denoising and edge preserving under various levels of noise and speckle in terms of visual quality as well as numeric metrics, such as peak signal-to-noise ratio, SNR, and root mean squared error. The experimental results show that the proposed method is effective for removing the speckle and the background noise in ultrasound images. The main reason is that it performs a "detect and replace" two-step mechanism. The advantages of the proposed RSBF lie in two aspects. Firstly, each central pixel is classified as noise, speckle or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched to eliminate speckle and noise, respectively, while the noise-free pixels are unaltered. Therefore, it is implemented with better accuracy and robustness than the traditional methods. Generally, these traits indicate that the proposed RSBF would have significant clinical application.
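
    To make the switching mechanism concrete, here is a schematic and deliberately simplified reconstruction of the detect-and-replace idea rather than the authors' RSBF: pixels far from the local median are treated as speckle and replaced by a Rayleigh maximum-likelihood scale estimate, moderately deviating pixels are bilaterally smoothed, and the rest are left unaltered. Thresholds, window size and weighting are guessed for illustration.

        import numpy as np

        def local_window(img, i, j, r):
            return img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]

        def toy_rsbf(img, t_speckle=0.25, t_noise=0.10, r=2, sigma_s=1.5, sigma_r=0.1):
            out = img.copy()
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    win = local_window(img, i, j, r)
                    diff = abs(img[i, j] - np.median(win))      # distance from the reference median
                    if diff > t_speckle:
                        # Rayleigh ML estimate of the scale from the window: sqrt(mean(x^2)/2)
                        out[i, j] = np.sqrt(np.mean(win ** 2) / 2.0)
                    elif diff > t_noise:
                        # simple bilateral smoothing: weights from spatial and intensity distance
                        ii, jj = np.indices(win.shape)
                        ci, cj = i - max(i - r, 0), j - max(j - r, 0)
                        w = (np.exp(-((ii - ci) ** 2 + (jj - cj) ** 2) / (2 * sigma_s ** 2))
                             * np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r ** 2)))
                        out[i, j] = np.sum(w * win) / np.sum(w)
                    # otherwise: treated as noise-free texture and left unaltered
            return out

        noisy = np.clip(0.5 + 0.1 * np.random.default_rng(8).normal(size=(64, 64)), 0, 1)
        print(f"std before {noisy.std():.3f}, after {toy_rsbf(noisy).std():.3f}")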