WorldWideScience

Sample records for covariates stratified models

  1. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab; Huang, Jianhua Z.

    2012-01-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  2. A Powerful Approach to Estimating Annotation-Stratified Genetic Covariance via GWAS Summary Statistics.

    Science.gov (United States)

    Lu, Qiongshi; Li, Boyang; Ou, Derek; Erlendsdottir, Margret; Powles, Ryan L; Jiang, Tony; Hu, Yiming; Chang, David; Jin, Chentian; Dai, Wei; He, Qidu; Liu, Zefeng; Mukherjee, Shubhabrata; Crane, Paul K; Zhao, Hongyu

    2017-12-07

    Despite the success of large-scale genome-wide association studies (GWASs) on complex traits, our understanding of their genetic architecture is far from complete. Jointly modeling multiple traits' genetic profiles has provided insights into the shared genetic basis of many complex traits. However, large-scale inference sets a high bar for both statistical power and biological interpretability. Here we introduce a principled framework to estimate annotation-stratified genetic covariance between traits using GWAS summary statistics. Through theoretical and numerical analyses, we demonstrate that our method provides accurate covariance estimates, thereby enabling researchers to dissect both the shared and distinct genetic architecture across traits to better understand their etiologies. Among 50 complex traits with publicly accessible GWAS summary statistics (N total ≈ 4.5 million), we identified more than 170 pairs with statistically significant genetic covariance. In particular, we found strong genetic covariance between late-onset Alzheimer disease (LOAD) and amyotrophic lateral sclerosis (ALS), two major neurodegenerative diseases, in single-nucleotide polymorphisms (SNPs) with high minor allele frequencies and in SNPs located in the predicted functional genome. Joint analysis of LOAD, ALS, and other traits highlights LOAD's correlation with cognitive traits and hints at an autoimmune component for ALS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  3. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab

    2012-10-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.

  4. Properties of the endogenous post-stratified estimator using a random forests model

    Science.gov (United States)

    John Tipton; Jean Opsomer; Gretchen G. Moisen

    2012-01-01

    Post-stratification is used in survey statistics as a method to improve variance estimates. In traditional post-stratification methods, the variable on which the data is being stratified must be known at the population level. In many cases this is not possible, but it is possible to use a model to predict values using covariates, and then stratify on these predicted...
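    The endogenous post-stratification idea described above, predicting the stratification variable from covariates with a fitted model and then stratifying on the predictions, can be sketched in a few lines. This is a toy pure-Python illustration with made-up data; the simple threshold rule stands in for the random-forest predictions used in the paper.

```python
def post_stratified_mean(y_sample, strata_sample, strata_pop_counts):
    """Post-stratified estimator of the population mean: weight each
    stratum's sample mean by its population share."""
    n_pop = sum(strata_pop_counts.values())
    est = 0.0
    for h, n_h in strata_pop_counts.items():
        ys = [y for y, s in zip(y_sample, strata_sample) if s == h]
        est += (n_h / n_pop) * (sum(ys) / len(ys))
    return est

# Endogenous post-stratification: stratum labels come from a model's
# predictions (a hypothetical threshold rule here, not a random forest).
def predict_stratum(x, cutoff=5.0):
    return "high" if x >= cutoff else "low"

x_sample = [1.0, 2.0, 6.0, 8.0]      # covariate observed in the sample
y_sample = [10.0, 12.0, 30.0, 34.0]  # outcome of interest
strata_sample = [predict_stratum(x) for x in x_sample]

# The covariate (and hence the predicted stratum) is known population-wide,
# which supplies the stratum counts needed for the weights.
strata_pop_counts = {"low": 600, "high": 400}

print(post_stratified_mean(y_sample, strata_sample, strata_pop_counts))
```

    Because the strata are model predictions rather than known population classes, the usual post-stratification variance formulas need the adjustments studied in the paper.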

  5. Modeling Covariance Breakdowns in Multivariate GARCH

    OpenAIRE

    Jin, Xin; Maheu, John M

    2014-01-01

    This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...

  6. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes all but one covariate varies within each individual (we denote the within covariates by the vector χ_1). We consider random coefficient models where some of the covariates do not vary within any single individual (we denote the between covariates by the vector χ_0). The regression coefficients, β_k, can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
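    The projection step in the final sentence is elementary linear algebra and can be illustrated directly. This is a minimal numpy sketch with hypothetical data, not the paper's estimator: project the within covariates onto the orthogonal complement of the between covariates before building the covariance structure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
X0 = rng.standard_normal((n, 2))   # between covariates
X1 = rng.standard_normal((n, 3))   # within covariates

# Projector onto the orthogonal complement of the column space of X0.
P = np.eye(n) - X0 @ np.linalg.solve(X0.T @ X0, X0.T)
X1_perp = P @ X1

print(np.round(X0.T @ X1_perp, 6))  # zero: orthogonal to the between covariates
```

    A covariance structure defined on the projected covariates is invariant under linear recombinations of the between covariates, which is the property the reduced model above lacks.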

  7. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
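    The conditioning trick in this abstract can be sketched compactly. Conditional on each stratum's total case count, Poisson cell counts are multinomial with probabilities proportional to person-time times the relative rate, so the stratum-specific intercepts cancel. The following is a minimal illustration with made-up cohort data and a crude grid search, not the authors' software:

```python
import math

def conditional_loglik(strata, beta):
    """Conditional Poisson log-likelihood for a log-linear model
    rate = exp(alpha_s + beta * dose).  Conditioning on each stratum's
    total case count eliminates the alpha_s nuisance parameters.
    Each stratum is a list of (cases, person_time, dose) cells."""
    ll = 0.0
    for cells in strata:
        weights = [pt * math.exp(beta * dose) for _, pt, dose in cells]
        total_w = sum(weights)
        for (cases, _, _), w in zip(cells, weights):
            ll += cases * math.log(w / total_w)
    return ll

# Toy two-stratum cohort: (cases, person-time, dose) per cell.
strata = [
    [(5, 100.0, 0.0), (8, 80.0, 1.0)],
    [(3, 60.0, 0.0), (7, 50.0, 1.0)],
]

# Maximize over beta on a coarse grid (a real fit would use Newton steps).
beta_hat = max((b / 100 for b in range(-100, 201)),
               key=lambda b: conditional_loglik(strata, b))
print(round(beta_hat, 2))
```

    As the abstract notes, the maximizer agrees with unconditional Poisson regression that carries an indicator term for every background stratum, but without estimating those coefficients.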

  8. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  9. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. Models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  10. Bayes Factor Covariance Testing in Item Response Models.

    Science.gov (United States)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
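    The Helmert transformation mentioned in this abstract is what makes the covariance components tractable: an orthonormal Helmert matrix diagonalizes any compound-symmetry covariance, separating the "common" component from the contrast components. A small numpy check of that fact (illustrative values only, not the paper's model fit):

```python
import numpy as np

def helmert(n):
    """Orthonormal Helmert matrix: first row is the normalized mean
    contrast; remaining rows are orthonormal contrasts."""
    H = np.zeros((n, n))
    H[0] = 1.0 / np.sqrt(n)
    for i in range(1, n):
        H[i, :i] = 1.0
        H[i, i] = -i
        H[i] /= np.sqrt(i * (i + 1))
    return H

n, sigma2, rho = 4, 2.0, 0.3
# Compound-symmetry covariance: common variance, common covariance.
S = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
H = helmert(n)
D = H @ S @ H.T  # diagonal: sigma2*(1+(n-1)*rho), then n-1 copies of sigma2*(1-rho)
print(np.round(np.diag(D), 6))
```

    After this rotation the transformed responses are independent, which is why the posterior of the covariance components comes out in closed (shifted-inverse-gamma) form.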

  11. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    Science.gov (United States)

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix must be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.

  12. Simulation model of stratified thermal energy storage tank using finite difference method

    Science.gov (United States)

    Waluyo, Joko

    2016-06-01

    Stratified TES tanks are normally used in cogeneration plants: they are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting of energy usage from off-peak to on-peak demand periods. To increase energy utilization in a stratified TES tank, a simulation model is required that can simulate the charging phenomenon in the tank precisely. This paper aims to develop a novel model addressing this problem. The model incorporates a chiller into the charging of the stratified TES tank in a closed system. The model is one-dimensional and accounts for heat transfer, covering the main factors that degrade the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect of the initial charging flow, and heat loss to the surroundings. The simulation model is developed with the finite difference method, utilizing buffer concept theory, and solved with an explicit scheme. The model is validated against observed data from an operating stratified TES tank in a cogeneration plant. The simulated temperature distribution reproduces the S-curve pattern as well as the decrease in charging temperature after the tank reaches the full condition. The coefficients of determination between the observed data and the model were higher than 0.88, indicating that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates temperature distributions but can also be enhanced to represent transient conditions during charging. This model can be applied to the temperature limitation that occurs when charging the stratified TES tank with an absorption chiller.
Further, the stratified TES tank can be
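    The core of such a model, an explicit finite-difference step for vertical heat diffusion across the thermocline, fits in a few lines. This is a bare sketch with invented node values and insulated (zero-gradient) ends; the paper's model additionally includes wall conduction, inlet mixing, and ambient losses.

```python
def step(T, alpha, dz, dt):
    """One explicit time step of the 1-D heat equation dT/dt = alpha*d2T/dz2
    on a column of nodes, with zero-gradient (insulated) boundaries."""
    r = alpha * dt / dz**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
    Tn[0], Tn[-1] = Tn[1], Tn[-2]
    return Tn

# Sharp thermocline: cool (7 C) bottom nodes, warm (14 C) top nodes.
T = [7.0] * 5 + [14.0] * 5
for _ in range(200):
    T = step(T, alpha=1.4e-7, dz=0.1, dt=60.0)  # water, 1-minute steps
print([round(t, 2) for t in T])
```

    Repeated stepping gradually smears the thermocline, which is the conduction-driven degradation of stratification the abstract refers to.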

  13. Numerical simulation of stratified flows with different k-ε turbulence models

    International Nuclear Information System (INIS)

    Dagestad, S.

    1991-01-01

    The thesis comprises the numerical simulation of stratified flows with different k-ε models. When using the k-ε model, two equations are solved to describe the turbulence: the k-equation represents the turbulent kinetic energy and the ε-equation the turbulent dissipation. Different k-ε models predict stratified flows differently. The standard k-ε model leads to higher turbulent mixing than the low-Reynolds model does, and for lower Froude numbers, F_0, this effect is enhanced. Buoyancy extension of the k-ε model also leads to less vertical mixing in cases with strong stratification; as the stratification increases, the buoyancy extension has a larger influence. The turbulent Prandtl number effects have a large impact on the transport of heat and the development of the flow. Two different formulae which express the turbulent Prandtl effects have been tested. For unstably stratified flows, the rapid mixing and three-dimensionality of the flow can in fact be computed using a k-ε model when buoyancy extension is employed. The turbulent heat transfer, and thus the turbulent production, in unstably stratified flows depends strongly upon the turbulent Prandtl number effect. The main conclusions are: stably stratified flows should be computed with a buoyancy-extended low-Reynolds k-ε model; unstably stratified flows should be computed with a buoyancy-extended standard k-ε model; the turbulent Prandtl number effects should be included in the computations; buoyancy extension has led to a more correct description of the physics for all of the investigated flows. 78 refs., 128 figs., 17 tabs

  14. Matérn-based nonstationary cross-covariance models for global processes

    KAUST Repository

    Jun, Mikyoung

    2014-01-01

    -covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters

  15. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  16. Matérn-based nonstationary cross-covariance models for global processes

    KAUST Repository

    Jun, Mikyoung

    2014-07-01

    Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the aic and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
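    For half-integer smoothness values the Matérn covariance has simple closed forms, which is enough to see the role of the smoothness parameter the paper lets vary over space. A minimal sketch (the paper's actual nonstationary construction, spatially varying smoothness plus differential operators, is not reproduced here):

```python
import math

def matern(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matern covariance at distance h, for the half-integer smoothness
    values that have simple closed forms (nu = 1/2 and nu = 3/2)."""
    if h == 0.0:
        return sigma2
    if nu == 0.5:                       # exponential covariance
        return sigma2 * math.exp(-h / rho)
    if nu == 1.5:
        a = math.sqrt(3.0) * h / rho
        return sigma2 * (1.0 + a) * math.exp(-a)
    raise NotImplementedError("only nu in {0.5, 1.5} implemented here")

# Larger smoothness keeps nearby values more strongly correlated.
print(round(matern(1.0, nu=0.5), 4), round(matern(1.0, nu=1.5), 4))
```

    Letting nu depend on location, as proposed in the paper, makes the process smoother in some regions than in others while keeping a valid covariance model.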

  17. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  18. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  19. MC3D modelling of stratified explosion

    International Nuclear Information System (INIS)

    Picchi, S.; Berthoud, G.

    1999-01-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin Helmholtz instabilities due to the shear flow of the two phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)

  20. MC3D modelling of stratified explosion

    Energy Technology Data Exchange (ETDEWEB)

    Picchi, S.; Berthoud, G. [DTP/SMTH/LM2, CEA, 38 - Grenoble (France)

    1999-07-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin Helmholtz instabilities due to the shear flow of the two phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced as well as the influence of the inertial constraint (height of the water pool). (author)

  1. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
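    The estimator's structure, a low-rank principal-components factor part plus a thresholded residual covariance, can be sketched in numpy. This is a simplified illustration with simulated data: it uses a single universal threshold, whereas the Cai–Liu technique cited above uses entry-adaptive thresholds.

```python
import numpy as np

def factor_threshold_cov(X, k, c=0.5):
    """Sketch of a factor-based covariance estimator: top-k principal
    components give the common-factor part; off-diagonal entries of the
    residual (idiosyncratic) covariance are hard-thresholded."""
    n, p = X.shape
    X = X - X.mean(axis=0)
    S = X.T @ X / n
    vals, vecs = np.linalg.eigh(S)
    top = np.argsort(vals)[::-1][:k]
    low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    R = S - low_rank
    tau = c * np.sqrt(np.log(p) / n)         # universal threshold level
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))      # keep residual variances
    return low_rank + R_thr

rng = np.random.default_rng(0)
n, p, k = 200, 10, 1
F = rng.standard_normal((n, k))              # common factor
B = rng.standard_normal((p, k))              # loadings
X = F @ B.T + 0.5 * rng.standard_normal((n, p))
Sigma_hat = factor_threshold_cov(X, k)
print(Sigma_hat.shape)
```

    Thresholding the residual part only (not the full covariance) is what lets the estimator tolerate the pervasive cross-sectional correlation induced by the common factors.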

  2. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared... ...and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  3. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  4. Validity of covariance models for the analysis of geographical variation

    DEFF Research Database (Denmark)

    Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio

    2014-01-01

    1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained att...

  5. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak

    2017-01-01

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix

  6. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  7. Globally covering a-priori regional gravity covariance models

    Directory of Open Access Journals (Sweden)

    D. Arabelos

    2003-01-01

    Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0 and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach

  8. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
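    The first selection criterion above, the Gaussian pseudolikelihood of the residuals under a candidate working correlation matrix, is easy to sketch. The following is an illustrative numpy example on simulated residuals, not the authors' software; data were generated with an exchangeable structure, so that working model should score higher.

```python
import numpy as np

def gaussian_pseudo_loglik(resid, R):
    """Gaussian pseudolikelihood of standardized residuals under a
    working correlation matrix R (higher is better)."""
    _, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    return sum(-0.5 * (logdet + r @ Rinv @ r) for r in resid)

def exchangeable(t, rho):
    return (1 - rho) * np.eye(t) + rho * np.ones((t, t))

def ar1(t, rho):
    idx = np.arange(t)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Simulate clusters with a true exchangeable correlation.
rng = np.random.default_rng(1)
t, n = 4, 300
L = np.linalg.cholesky(exchangeable(t, 0.5))
resid = rng.standard_normal((n, t)) @ L.T    # one row per cluster

ll_ex = gaussian_pseudo_loglik(resid, exchangeable(t, 0.5))
ll_ar = gaussian_pseudo_loglik(resid, ar1(t, 0.5))
print(ll_ex > ll_ar)
```

    In practice the candidate matrices would come from fitted GEE working models, and the paper's second criterion (the geodesic discrepancy between model-sensitive and robust covariance estimators) offers a complementary check.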

  9. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    Science.gov (United States)

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  10. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using a Normal-inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
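The variance/correlation separation and the sphere representation used in this record can be shown in a few lines of numpy: the Cholesky factor of a correlation matrix has unit-norm rows, so each row is a point on a unit sphere and every correlation is an inner product of two such vectors (the priors the paper places on these vectors are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random SPD covariance matrix.
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)

# Separation strategy: Sigma = D C D, with D = diag(standard deviations)
# and C a correlation matrix.
d = np.sqrt(np.diag(Sigma))
C = Sigma / np.outer(d, d)

# The Cholesky factor L of a correlation matrix has unit-norm rows, so each
# row lies on a unit sphere and C[i, j] = <L[i], L[j]> -- the representation
# the paper equips with flexible spherical priors.
L = np.linalg.cholesky(C)
row_norms = np.linalg.norm(L, axis=1)

print(np.allclose(row_norms, 1.0), np.allclose(L @ L.T, C))
```

Parametrizing correlations through these unit vectors removes the positive-definiteness constraint from the sampling problem, which is what makes the spherical Hamiltonian Monte Carlo scheme applicable.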

  11. Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning

    1996-01-01

    In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...

  12. Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning

    In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...

  13. Modelling the Covariance Structure in Marginal Multivariate Count Models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Olivero, J.; Grande-Vega, M.

    2017-01-01

    The main goal of this article is to present a flexible statistical modelling framework to deal with multivariate count data along with longitudinal and repeated measures structures. The covariance structure for each response variable is defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. In order to specify the joint covariance matrix for the multivariate response vector, the generalized Kronecker product is employed. We take into account the count nature of the data by means of the power dispersion function associated with the Poisson... ...be used to indicate whether there was statistical evidence of a decline in blue duikers and other species hunted during the study period. Determining whether observed drops in the number of animals hunted are indeed true is crucial to assess whether species depletion effects are taking place in exploited...

  14. Some remarks on estimating a covariance structure model from a sample correlation matrix

    OpenAIRE

    Maydeu Olivares, Alberto; Hernández Estrada, Adolfo

    2000-01-01

    A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...

  15. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Science.gov (United States)

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why different covariates may be identified for the same drug in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
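The covariate-competition mechanism in this record can be sketched with a small Monte Carlo simulation, a simplified stand-in for the paper's population pharmacokinetic setup (plain univariate linear regression instead of a mixed effect model; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

n, n_sim, r, beta = 100, 500, 0.98, 0.5
wrong = 0
for _ in range(n_sim):
    x1 = rng.standard_normal(n)                               # true covariate (e.g. weight)
    x2 = r * x1 + np.sqrt(1 - r**2) * rng.standard_normal(n)  # correlated proxy (e.g. BSA)
    y = beta * x1 + rng.standard_normal(n)                    # data generated from x1 only
    # Univariate model selection by residual sum of squares (equivalent to AIC
    # here, since both candidate models have the same number of parameters).
    rss = []
    for x in (x1, x2):
        X = np.column_stack([np.ones(n), x])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss.append(np.sum((y - X @ b) ** 2))
    wrong += rss[1] < rss[0]
freq = wrong / n_sim
print(freq)  # fraction of simulations where the wrong (proxy) covariate is selected
```

With r = 0.98, the proxy covariate wins a nontrivial fraction of the time by sheer correlation, even though the true covariate is still selected in the majority of replicates.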

  16. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
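A test of one of the basic covariance patterns named in this record (compound symmetry) can be sketched without SAS, using a maximum-likelihood fit of the patterned covariance and a likelihood ratio test against the unstructured alternative. This is a generic ML/LRT sketch, not the MSTRUCT implementation; the data-generating settings are illustrative:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)

# Simulate data whose true covariance is AR(1) -- i.e. NOT compound symmetric.
p, n, rho = 4, 500, 0.7
Sigma_true = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = X.T @ X / n  # ML sample covariance (mean known to be zero)

def neg_loglik_cs(theta):
    """(-2/n) * log-likelihood kernel under compound symmetry Sigma = s2*((1-r)I + r*J)."""
    log_s2, r = theta
    s2 = np.exp(log_s2)
    Sigma = s2 * ((1 - r) * np.eye(p) + r * np.ones((p, p)))
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return np.inf
    return logdet + np.trace(np.linalg.solve(Sigma, S))

res = optimize.minimize(neg_loglik_cs, x0=[0.0, 0.1],
                        bounds=[(-5, 5), (-1 / (p - 1) + 1e-6, 1 - 1e-6)])

# LRT: compound symmetry (2 params) vs unstructured (p(p+1)/2 params).
# The unstructured minimum of logdet(Sigma) + tr(Sigma^{-1} S) is logdet(S) + p.
sign, logdet_S = np.linalg.slogdet(S)
lrt = n * (res.fun - (logdet_S + p))
df = p * (p + 1) // 2 - 2
pval = stats.chi2.sf(lrt, df)
print(lrt, pval)
```

Since the data were generated under AR(1) dependence, the compound symmetry pattern is (correctly) rejected.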

  17. Bayes factor covariance testing in item response models

    NARCIS (Netherlands)

    Fox, J.P.; Mulder, J.; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  18. Bayes Factor Covariance Testing in Item Response Models

    NARCIS (Netherlands)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  19. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Stratified turbulent Bunsen flames : flame surface analysis and flame surface density modelling

    NARCIS (Netherlands)

    Ramaekers, W.J.S.; Oijen, van J.A.; Goey, de L.P.H.

    2012-01-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold

  1. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.

    2012-03-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
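The Pourahmadi (2000) device this record builds on can be shown directly: the modified Cholesky decomposition turns a covariance matrix for ordered measurements into unconstrained autoregressive coefficients and innovation variances, which is what makes a GLM-with-covariates formulation possible. A minimal numpy sketch of the decomposition itself (not the paper's EM extension to unbalanced data):

```python
import numpy as np

rng = np.random.default_rng(3)

# An arbitrary SPD covariance for t = 5 ordered measurement times.
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)

# Modified Cholesky decomposition: T @ Sigma @ T.T = D, with T unit lower
# triangular.  The below-diagonal entries of -T are autoregressive coefficients
# (y_t regressed on its predecessors) and diag(D) holds innovation variances --
# both unconstrained, so each can be modelled through covariates in a GLM setup.
C = np.linalg.cholesky(Sigma)          # Sigma = C C'
L = C / np.diag(C)                     # unit lower triangular factor: Sigma = L D L'
D = np.diag(np.diag(C) ** 2)
T = np.linalg.inv(L)

phi = -np.tril(T, k=-1)                # autoregressive coefficients, unconstrained
log_innov_var = np.log(np.diag(D))     # log innovation variances, unconstrained

print(np.allclose(T @ Sigma @ T.T, D))
```

Because `phi` and `log_innov_var` are free of the positive-definiteness constraint, regressing them on covariates always yields a valid covariance matrix on reconstruction.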

  2. Theoretical study of evaporation heat transfer in horizontal microfin tubes: stratified flow model

    Energy Technology Data Exchange (ETDEWEB)

    Honda, H; Wang, Y S [Kyushu Univ., Inst. for Materials Chemistry and Engineering, Kasuga, Fukuoka (Japan)

    2004-08-01

    A stratified flow model of evaporation heat transfer in helically grooved, horizontal microfin tubes has been developed. The profile of the stratified liquid was determined by a theoretical model previously developed for condensation in horizontal microfin tubes. For the region above the stratified liquid, the meniscus profile in the groove between adjacent fins was determined by a force balance between the gravity and surface tension forces. The thin-film evaporation model was applied to predict heat transfer in the thin-film region of the meniscus. Heat transfer through the stratified liquid was estimated using an empirical correlation proposed by Mori et al. The theoretical predictions of the circumferentially averaged heat transfer coefficient were compared with available experimental data for four tubes and three refrigerants. Good agreement was obtained in the region Fr₀ < 2.5 as long as partial dryout of the tube surface did not occur. (Author)

  3. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    Science.gov (United States)

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, the columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
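The separable row/column covariance structure underlying this record can be sketched with the classical unpenalized "flip-flop" maximum-likelihood iterations for a matrix-variate normal (the paper adds penalties on the inverse covariances and a mean restriction, both omitted here; all dimensions and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

p, q, N = 3, 4, 500
# True row and column covariances of a matrix-variate normal.
Sr = np.array([[1.0, 0.5, 0.2],
               [0.5, 1.0, 0.3],
               [0.2, 0.3, 1.0]])
Sc = 0.6 ** np.abs(np.subtract.outer(np.arange(q), np.arange(q)))

# Draw N matrix-normal samples: X = A Z B' with A A' = Sr and B B' = Sc.
A, B = np.linalg.cholesky(Sr), np.linalg.cholesky(Sc)
X = np.einsum('ij,njk,lk->nil', A, rng.standard_normal((N, p, q)), B)

# Flip-flop ML iterations for the separable covariance.
Sr_hat, Sc_hat = np.eye(p), np.eye(q)
for _ in range(50):
    Sc_inv = np.linalg.inv(Sc_hat)
    Sr_hat = np.einsum('nij,jk,nlk->il', X, Sc_inv, X) / (N * q)
    Sr_inv = np.linalg.inv(Sr_hat)
    Sc_hat = np.einsum('nji,jk,nkl->il', X, Sr_inv, X) / (N * p)

# The two factors are identified only up to a scale trade-off, so compare correlations.
def corr(S):
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

print(np.max(np.abs(corr(Sr_hat) - corr(Sr))))
```

The recovered row correlation matrix is close to the truth; in the transposable regularized models, each update is replaced by a penalized inverse-covariance estimate so that the factors stay well conditioned in high dimensions.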

  4. Modeling the Conducting Stably-Stratified Layer of the Earth's Core

    Science.gov (United States)

    Petitdemange, L.; Philidet, J.; Gissinger, C.

    2017-12-01

    Observations of the Earth's magnetic field, as well as recent theoretical work, suggest that the Earth's outer liquid core mostly comprises a convective zone in which the Earth's magnetic field is generated, likely by dynamo action, but also features a thin, stably stratified layer at the top of the core. We carry out direct numerical simulations that model this thin layer as an axisymmetric spherical Couette flow for a stably stratified fluid embedded in a dipolar magnetic field. The dynamo region is modeled by a conducting inner core rotating slightly faster than the insulating mantle due to magnetic torques acting on it, such that a weak differential rotation (low Rossby limit) can develop in the stably stratified layer. In the case of a non-stratified fluid, the combined action of the differential rotation and the magnetic field leads to the well-known regime of 'super-rotation', in which the fluid rotates faster than the inner core. Whereas this super-rotation is known to vanish in the magnetostrophic limit in the classical case, we show here that fluid stratification significantly extends the magnitude of the super-rotation, keeping this phenomenon relevant for the Earth's core. Finally, we study how the shear layers generated by this new state might give rise to magnetohydrodynamic instabilities or waves affecting the secular variation or jerks of the Earth's magnetic field.

  5. Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

    OpenAIRE

    Liang, Yuli

    2015-01-01

    This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....

  6. Promotion time cure rate model with nonparametric form of covariate effects.

    Science.gov (United States)

    Chen, Tianlei; Du, Pang

    2018-05-10

    Survival data with a cured portion are commonly seen in clinical trials. Motivated by a biological interpretation of cancer metastasis, the promotion time cure model is a popular alternative to the mixture cure rate model for analyzing such data. Existing promotion time cure models all assume a restrictive parametric form of covariate effects, which can be incorrectly specified, especially at the exploratory stage. In this paper, we propose a nonparametric approach to modeling the covariate effects under the framework of the promotion time cure model. The covariate effect function is estimated by smoothing splines via the optimization of a penalized profile likelihood. Point-wise interval estimates are also derived from the Bayesian interpretation of the penalized profile likelihood. Asymptotic convergence rates are established for the proposed estimates. Simulations show excellent performance of the proposed nonparametric method, which is then applied to a melanoma study. Copyright © 2018 John Wiley & Sons, Ltd.

  7. RANS Modeling of Stably Stratified Turbulent Boundary Layer Flows in OpenFOAM®

    Directory of Open Access Journals (Sweden)

    Wilson Jordan M.

    2015-01-01

    Quantifying mixing processes relating to the transport of heat, momentum, and scalar quantities of stably stratified turbulent geophysical flows remains a substantial task. In a stably stratified flow, such as the stable atmospheric boundary layer (SABL), buoyancy forces have a significant impact on the flow characteristics. This study investigates constant and stability-dependent turbulent Prandtl number (Prt) formulations linking the turbulent viscosity (νt) and diffusivity (κt) for modeling applications of boundary layer flows. Numerical simulations of plane Couette flow and pressure-driven channel flow are performed using the Reynolds-averaged Navier-Stokes (RANS) framework with the standard k-ε turbulence model. Results are compared with DNS data to evaluate model efficacy for predicting mean velocity and density fields. In channel flow simulations, a Prandtl number formulation for wall-bounded flows is introduced to alleviate overmixing of the mean density field. This research reveals that appropriate specification of Prt can improve predictions of stably stratified turbulent boundary layer flows.
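The role of a stability-dependent turbulent Prandtl number can be sketched in a few lines. The linear growth of Prt with the gradient Richardson number below, and all constants, are illustrative assumptions for the sketch, not the formulation used in the paper:

```python
import numpy as np

# Gradient Richardson numbers spanning neutral to strongly stable conditions.
Ri = np.linspace(0.0, 1.0, 11)
nu_t = 0.05  # turbulent viscosity [m^2/s], held fixed for illustration

# A simple stability-dependent turbulent Prandtl number: linear growth with Ri
# from its neutral value (illustrative form and constants).
Prt_neutral, alpha = 0.85, 4.0
Prt = Prt_neutral + alpha * Ri

# The turbulent diffusivity is linked to the viscosity through Prt.
kappa_t = nu_t / Prt

# Increasing Prt with stability damps scalar mixing (smaller kappa_t), which is
# the mechanism a Prt formulation uses to counter overmixing of the mean density field.
print(kappa_t[0], kappa_t[-1])
```

A constant-Prt closure would keep `kappa_t` proportional to `nu_t` at all stabilities, which is precisely the behavior that overmixes the density field in stably stratified channel flow.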

  8. ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities

    International Nuclear Information System (INIS)

    Muir, D.W.

    1989-01-01

    File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities
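The propagation rule that File 30 encodes is the first-order "sandwich" formula: if M is the covariance matrix of the model parameters and S the matrix of sensitivity coefficients, the covariance of the derived data is S M Sᵀ. A minimal numpy sketch (both matrices below are made up for illustration):

```python
import numpy as np

# Parameter covariance matrix M for, say, three nuclear-model parameters.
M = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

# Sensitivity matrix S: S[i, j] = d(sigma_i)/d(p_j) for five tabulated cross sections.
S = np.array([[1.0, 0.2, 0.0],
              [0.8, 0.5, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.7, 0.9],
              [0.0, 0.2, 1.1]])

# First-order propagation of parameter covariances to the data: Cov(sigma) = S M S^T.
# File 30 stores M and S so that this (possibly much larger) matrix need not be tabulated.
cov_data = S @ M @ S.T

print(cov_data.shape)
```

Storing the 3x3 parameter covariance plus the 5x3 sensitivities is more compact than the full 5x5 data covariance, and the advantage grows rapidly with the number of tabulated quantities.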

  9. Generalized Extreme Value model with Cyclic Covariate Structure ...

    Indian Academy of Sciences (India)


    ...enhances the estimation of the return period; however, its application is ... Cohn T A and Lins H F 2005 Nature's style: Naturally trendy; Geophysical ... Final non-stationary GEV models with covariate structures shortlisted based on ...

  10. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars' adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' response to environments, and it can be successfully used in METs data after determining the optimal number of components for each dataset. (Author)

  11. Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)

    DEFF Research Database (Denmark)

    Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis

    We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...

  12. Improvements to TRAC models of condensing stratified flow. Pt. 1

    International Nuclear Information System (INIS)

    Zhang, Q.; Leslie, D.C.

    1991-12-01

    Direct contact condensation in stratified flow is an important phenomenon in LOCA analyses. In this report, the TRAC interfacial heat transfer model for stratified condensing flow has been assessed against the Bankoff experiments. A rectangular channel option has been added to the code to represent the experimental geometry. In almost all cases the TRAC heat transfer coefficient (HTC) over-predicts the condensation rates, and in some cases it is so high that the predicted steam is sucked in from the normal outlet in order to conserve mass. Based on their cocurrent and countercurrent condensing flow experiments, Bankoff and his students (Lim 1981, Kim 1985) developed HTC models for the two cases. The replacement of the TRAC HTC with either of Bankoff's models greatly improves the predictions of condensation rates in the experiment with cocurrent condensing flow. However, the Bankoff HTC for countercurrent flow is preferable because it is based only on local quantities rather than on quantities averaged from the inlet. (author)

  13. Merons in a generally covariant model with Gursey term

    International Nuclear Information System (INIS)

    Akdeniz, K.G.; Smailagic, A.

    1982-10-01

    We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)

  14. Modeling the Conditional Covariance between Stock and Bond Returns

    NARCIS (Netherlands)

    P. de Goeij (Peter); W.A. Marquering (Wessel)

    2002-01-01

    To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for

  15. Covariant, chirally symmetric, confining model of mesons

    International Nuclear Information System (INIS)

    Gross, F.; Milana, J.

    1991-01-01

    We introduce a new model of mesons as quark-antiquark bound states. The model is covariant, confining, and chirally symmetric. Our equations give an analytic solution for a zero-mass pseudoscalar bound state in the case of exact chiral symmetry, and also reduce to the familiar, highly successful nonrelativistic linear potential models in the limit of heavy-quark mass and lightly bound systems. In this fashion we are constructing a unified description of all the mesons from the π through the Υ. Numerical solutions for other cases are also presented

  16. A class of covariate-dependent spatiotemporal covariance functions

    Science.gov (United States)

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.

    2014-01-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
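A deliberately minimal stand-in for the covariate-dependent covariance classes in this record: an exponential covariance in distance whose local standard deviation depends on a covariate observed at each site (the functional form and constants are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Observation sites on a line, each with a local covariate z(s) (e.g. altitude).
s = np.linspace(0.0, 10.0, 25)
z = np.sin(s)  # illustrative covariate field

# Covariate-dependent covariance: stationary exponential correlation in
# distance, scaled by a local standard deviation that grows with |z(s)|.
sigma = 1.0 + 0.5 * np.abs(z)
dist = np.abs(np.subtract.outer(s, s))
C = np.outer(sigma, sigma) * np.exp(-dist / 2.0)

# Scaling a positive-definite kernel by sigma(s_i) * sigma(s_j) preserves
# validity: the result is still symmetric and positive semi-definite.
eigvals = np.linalg.eigvalsh(C)
print(C.shape, eigvals.min() >= -1e-10)
```

Letting the correlation range itself vary with covariates, as in the paper, requires more care than this variance rescaling, since an arbitrary site-dependent range does not automatically yield a valid covariance function.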

  17. Covariance evaluation system

    International Nuclear Information System (INIS)

    Kawano, Toshihiko; Shibata, Keiichi.

    1997-09-01

    A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238U reaction cross sections were calculated with this system. (author)

  18. Computational Fluid Dynamics model of stratified atmospheric boundary-layer flow

    DEFF Research Database (Denmark)

    Koblitz, Tilman; Bechmann, Andreas; Sogachev, Andrey

    2015-01-01

    For wind resource assessment, the wind industry is increasingly relying on computational fluid dynamics models of the neutrally stratified surface-layer. So far, physical processes that are important to the whole atmospheric boundary-layer, such as the Coriolis effect, buoyancy forces and heat...

  19. Robustness studies in covariance structure modeling - An overview and a meta-analysis

    NARCIS (Netherlands)

    Hoogland, Jeffrey J.; Boomsma, A

    In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the

  20. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...

  1. Covariance matrices for nuclear cross sections derived from nuclear model calculations

    International Nuclear Information System (INIS)

    Smith, D. L.

    2005-01-01

    The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
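The Monte Carlo alternative to matrix error propagation described in this record can be sketched with a toy stand-in for a nuclear-model calculation: sample the parameters from their covariance, evaluate the model for each draw, and take the empirical covariance of the outputs (the model, parameter values, and covariances below are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

# A toy "nuclear model": a cross-section curve over energies E as a function of
# two parameters (an illustrative stand-in for a model-code calculation).
E = np.linspace(1.0, 5.0, 6)
def model(a, b):
    return a * np.exp(-b * E)

# Parameter best estimates and their covariance (assumed given by the evaluation).
mean = np.array([10.0, 0.4])
cov_param = np.array([[0.25,   0.01],
                      [0.01, 0.0016]])

# Monte Carlo propagation: sample parameters, run the model per draw, and take
# the empirical covariance of the resulting cross-section values.
draws = rng.multivariate_normal(mean, cov_param, size=20000)
curves = np.array([model(a, b) for a, b in draws])
cov_xs = np.cov(curves, rowvar=False)

print(cov_xs.shape)
```

Unlike the first-order sandwich formula, this approach captures the nonlinearity of the model in the parameters at the cost of repeated model evaluations.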

  2. Robust estimation for partially linear models with large-dimensional covariates.

    Science.gov (United States)

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  3. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    Science.gov (United States)

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.

  4. Forecasting Co-Volatilities via Factor Models with Asymmetry and Long Memory in Realized Covariance

    NARCIS (Netherlands)

    M. Asai (Manabu); M.J. McAleer (Michael)

    2014-01-01

    Modelling covariance structures is known to suffer from the curse of dimensionality. In order to avoid this problem for forecasting, the authors propose a new factor multivariate stochastic volatility (fMSV) model for realized covariance measures that accommodates asymmetry and long memory.

  5. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
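A minimal sketch of the regression-calibration idea with replicate measurements, on invented data rather than the REM activity counts from the study: the error-prone replicate mean is shrunk toward the grand mean before it enters the survival model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 2                                   # subjects, replicates per subject

true_x = rng.normal(0.0, 1.0, n)                # true covariate (hypothetical scale)
w = true_x[:, None] + rng.normal(0.0, 0.5, (n, k))  # replicate measurements

w_bar = w.mean(axis=1)
# Within-subject variance estimates the measurement-error variance sigma_u^2
sigma_u2 = w.var(axis=1, ddof=1).mean()
# Variance of the true covariate: between-subject variance minus error part
sigma_x2 = max(w_bar.var(ddof=1) - sigma_u2 / k, 1e-12)

# Regression calibration: shrink each replicate mean toward the grand mean
lam = sigma_x2 / (sigma_x2 + sigma_u2 / k)
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())
```

The calibrated values `x_hat` (an estimate of E[X | W]) would then replace the raw replicate means as the predictor in the Cox model, reducing the attenuation bias in the parameter estimate.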

  6. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  7. Robust entry guidance using linear covariance-based model predictive control

    Directory of Open Access Journals (Sweden)

    Jianjun Luo

    2017-02-01

    For atmospheric entry vehicles, guidance design can be accomplished by solving an optimal control problem. However, traditional design methods generally focus on nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. Firstly, linear covariance analysis is employed to directly incorporate robustness into the guidance design. The closed-loop covariance with the feedback-updated control command is initially formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is innovatively used as a component of the cost function to reduce the guidance law's sensitivity to uncertainties. After that, model predictive control is used to solve the optimal problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions demonstrates high precision and robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence-region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.

  8. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast

  9. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
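The two-part covariance approximation can be illustrated on a small example; the exponential covariance, the rank, and the block size below are arbitrary choices for the sketch, not those of the paper:

```python
import numpy as np

# Hypothetical dense spatial covariance on a small 1-D grid (exponential kernel)
s = np.linspace(0, 1, 60)
C = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.3)

# Reduced-rank part: leading eigenpairs capture the large-scale dependence
vals, vecs = np.linalg.eigh(C)        # eigenvalues in ascending order
r = 5
low_rank = vecs[:, -r:] @ np.diag(vals[-r:]) @ vecs[:, -r:].T

# Sparse correction: keep the small-scale residual only inside diagonal blocks
resid = C - low_rank
block = 10
mask = np.zeros_like(C, dtype=bool)
for b in range(0, len(s), block):
    mask[b:b + block, b:b + block] = True
approx = low_rank + np.where(mask, resid, 0.0)
```

The block-diagonal correction repairs exactly the short-range dependence that the reduced-rank term misses, which is the case the paper pays special attention to.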

  10. Poincare covariance and κ-Minkowski spacetime

    International Nuclear Information System (INIS)

    Dabrowski, Ludwik; Piacitelli, Gherardo

    2011-01-01

    A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model), where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. -- Highlights: → We construct a 4d model of noncommuting coordinates (quantum spacetime). → The coordinates are fully covariant under the undeformed Poincare group. → Covariance a la Wigner holds in presence of two dimensionful parameters. → Hence we are not forced to deform covariance (e.g. as quantum groups). → The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.

  11. Simulations and cosmological inference: A statistical model for power spectra means and covariances

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles

    2008-01-01

    We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.

  12. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    Science.gov (United States)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analyses indicate that for small stratifications (0.4⩽φ⩽1.0), the replacement of the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m=m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3), and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted only in small additional deviations from DNS data.
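The β-PDF closure for the filtered mass burning rate can be sketched numerically; both the burning-rate curve m(φ) and the subfilter mean and variance below are invented for illustration:

```python
import numpy as np
from math import lgamma

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def beta_pdf(phi, mean, var, lo=0.1, hi=1.3):
    """Beta PDF on [lo, hi], parameterized by its mean and variance."""
    m = (mean - lo) / (hi - lo)                           # rescale to [0, 1]
    v = min(var / (hi - lo) ** 2, m * (1.0 - m) * 0.999)  # keep shape params valid
    a = m * (m * (1.0 - m) / v - 1.0)
    b = (1.0 - m) * (m * (1.0 - m) / v - 1.0)
    x = np.clip((phi - lo) / (hi - lo), 1e-12, 1.0 - 1e-12)
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1.0) * np.log(x) + (b - 1.0) * np.log1p(-x)) / (hi - lo)

def burning_rate(phi):
    """Hypothetical m(phi): vanishes at the range limits, peaks in between."""
    return np.sin(np.pi * (phi - 0.1) / 1.2) ** 2

# Filtered mass burning rate: integrate m(phi) against the subfilter PDF
phi = np.linspace(0.1, 1.3, 400)
pdf = beta_pdf(phi, mean=0.7, var=0.02)   # assumed subfilter mean and variance
m_filtered = trapezoid(burning_rate(phi) * pdf, phi)
```

Replacing `beta_pdf` by a Dirac function amounts to evaluating `burning_rate` at the filtered mixture fraction directly, which is the small-stratification limit described above.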

  14. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the covariance forecasts.

  15. One-stage individual participant data meta-analysis models: estimation of treatment-covariate interactions must avoid ecological bias by separating out within-trial and across-trial information.

    Science.gov (United States)

    Hua, Hairui; Burke, Danielle L; Crowther, Michael J; Ensor, Joie; Tudur Smith, Catrin; Riley, Richard D

    2017-02-28

    Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd
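A small sketch of the recommended centring step on simulated IPD; all variable names and the simulated trial structure are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical IPD: trial id, patient-level covariate (age), treatment arm
trial = np.repeat(np.arange(5), 100)
age = rng.normal(50 + 5 * trial, 8)            # trial means differ -> across-trial info
treat = rng.integers(0, 2, size=age.size)

# Centre the covariate by its mean value in each trial
trial_mean = np.array([age[trial == t].mean() for t in range(5)])
age_within = age - trial_mean[trial]           # within-trial information only
age_across = trial_mean[trial]                 # across-trial (ecological) information

# The treatment-covariate interaction is then modelled with treat * age_within,
# while treat * age_across can enter separately to absorb ecological effects
X_interaction = treat * age_within
```

Fitting the interaction on `treat * age` directly would amalgamate the two sources of information, which is exactly the mis-specification the paper warns against.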

  16. Covariant quantization of infinite spin particle models, and higher order gauge theories

    International Nuclear Information System (INIS)

    Edgren, Ludde; Marnelius, Robert

    2006-01-01

    Further properties of a recently proposed higher order infinite spin particle model are derived. Infinitely many classically equivalent but different Hamiltonian formulations are shown to exist. This leads to a condition of uniqueness in the quantization process. A consistent covariant quantization is shown to exist. Also a recently proposed supersymmetric version for half-odd integer spins is quantized. A general algorithm to derive gauge invariances of higher order Lagrangians is given and applied to the infinite spin particle model, and to a new higher order model for a spinning particle which is proposed here, as well as to a previously given higher order rigid particle model. The latter two models are also covariantly quantized

  17. A reduced covariant string model for the extrinsic string

    International Nuclear Information System (INIS)

    Botelho, L.C.L.

    1989-01-01

    A reduced covariant string model for the extrinsic string is studied using Polyakov's path-integral formalism. On the basis of this reduced model it is suggested that the extrinsic string has a critical dimension of 13. Additionally, Polyakov's renormalization group law for the string rigidity coupling constants is calculated in a simple way.

  18. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  19. Are your covariates under control? How normalization can re-introduce covariate effects.

    Science.gov (United States)

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
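The effect of the ordering can be reproduced on simulated data; the skewed outcome below is an invented example, and `rank_int` implements a Blom-type rank-based INT:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
n = 2000

def rank_int(x):
    """Rank-based inverse normal transform with a Blom-type offset."""
    m = len(x)
    ranks = x.argsort().argsort() + 1          # ranks 1..m
    inv = NormalDist().inv_cdf
    return np.array([inv((r - 0.375) / (m + 0.25)) for r in ranks])

def residualize(y, x):
    """OLS residuals of y regressed on x (with intercept)."""
    beta = np.polyfit(x, y, 1)
    return y - np.polyval(beta, x)

covariate = rng.normal(size=n)
# Skewed dependent variable that depends on the covariate
y = np.exp(0.5 * covariate + rng.normal(size=n))

# Approach 1: regress out the covariate, then INT the residuals
r1 = rank_int(residualize(y, covariate))
# Approach 2 (recommended): INT the dependent variable first, then regress
r2 = residualize(rank_int(y), covariate)

corr1 = np.corrcoef(r1, covariate)[0, 1]   # correlation re-introduced by INT
corr2 = np.corrcoef(r2, covariate)[0, 1]   # essentially zero by construction
```

Because OLS residuals are exactly orthogonal to the regressor, `corr2` is zero up to floating-point noise, while the nonlinear rank transform in approach 1 leaves a residual association with the covariate.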

  20. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition

    DEFF Research Database (Denmark)

    Yang, Yukay

    I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against parametric alternatives, and apply them to consider multivariate volatility modelling.

  1. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Evaluation of a Stratified National Breast Screening Program in the United Kingdom : An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D. Gareth R.; Astley, Sue; Payne, Katherine

    Objectives: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. Methods: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,

  4. A multivariate multilevel Gaussian model with a mixed effects structure in the mean and covariance part.

    Science.gov (United States)

    Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel

    2014-05-20

    A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean we also assume that the covariance matrix depends on covariates and random effects. This allows one to explore whether the covariance structure depends on the values of the higher levels and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on not-recorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.
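The a posteriori estimate can be made schematic in a few lines; the local-model parameter fits below are invented values standing in for the optical-model parameters of the paper:

```python
import numpy as np

# Hypothetical local-model parameter fits for several elements in the region:
# each row is a parameter vector fitted to one element's data
local_fits = np.array([
    [46.1, 1.21, 0.64],
    [45.3, 1.24, 0.62],
    [47.0, 1.19, 0.66],
    [45.8, 1.22, 0.63],
    [46.5, 1.20, 0.65],
])

regional = local_fits.mean(axis=0)     # regional representation of the parameters
scatter = local_fits - regional        # elemental deviations about it

# A posteriori absolute and relative parameter covariance matrices
abs_cov = np.cov(scatter, rowvar=False)
rel_cov = abs_cov / np.outer(regional, regional)
std = np.sqrt(np.diag(abs_cov))
corr = abs_cov / np.outer(std, std)    # off-diagonal terms reveal correlations
```

Strong off-diagonal correlations in `corr` are what, propagated through the model, can shrink the derived cross-section uncertainties relative to an uncorrelated quadrature sum.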

  7. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.

  8. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data

    International Nuclear Information System (INIS)

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-01-01

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. 
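    As a rough illustration of this kind of model (not the authors' implementation; the function and parameter names below are invented), one can build a diagonal spatial factor from per-sensor variances and a Toeplitz temporal factor from the average autocovariance of a short noise segment:

```python
import numpy as np
from scipy.linalg import toeplitz

def kronecker_noise_cov(noise, max_lag):
    """Sketch of a diagonal-spatial x Toeplitz-temporal noise covariance.

    noise : (n_sensors, n_times) array of (averaged) prestimulus noise.
    Returns (C_spatial, C_temporal); their Kronecker product models the
    full spatiotemporal noise covariance.
    """
    n_sensors, n_times = noise.shape
    # Diagonal spatial covariance: per-sensor variances only.
    C_spatial = np.diag(noise.var(axis=1, ddof=1))
    # Toeplitz temporal covariance from the average autocovariance
    # of the sensor time series, truncated at max_lag.
    centered = noise - noise.mean(axis=1, keepdims=True)
    acov = np.zeros(n_times)
    for lag in range(min(max_lag + 1, n_times)):
        prods = centered[:, :n_times - lag] * centered[:, lag:]
        acov[lag] = prods.mean()
    # Normalize so the temporal factor has unit variance at lag 0;
    # the overall scale lives in C_spatial.
    r = acov / acov[0]
    C_temporal = toeplitz(r)
    return C_spatial, C_temporal
```

    No optimization is involved: both factors come directly from simple moments of the limited noise data, which is the practical appeal the abstract describes.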

  9. Experimental Validation of a Domestic Stratified Hot Water Tank Model in Modelica for Annual Performance Assessment

    DEFF Research Database (Denmark)

    Carmo, Carolina; Dumont, Olivier; Nielsen, Mads Pagh

    2015-01-01

    The use of stratified hot water tanks in solar energy systems - including ORC systems - as well as heat pump systems is paramount for a better performance of these systems. However, the availability of effective and reliable models to predict the annual performance of stratified hot water tanks...

  10. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...

  11. Design of dry sand soil stratified sampler

    Science.gov (United States)

    Li, Erkang; Chen, Wei; Feng, Xiao; Liao, Hongbo; Liang, Xiaodong

    2018-04-01

    This paper presents the design of a stratified sampler for dry sand soil, which can be used for stratified sampling of loose sand under certain conditions. Our group designed the mechanical structure of a portable, single-person, dry sandy soil stratified sampler. We have set up a mathematical model for the sampler. It lays the foundation for further development of design research.

  12. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...

  13. On estimating cosmology-dependent covariance matrices

    International Nuclear Information System (INIS)

    Morrison, Christopher B.; Schneider, Michael D.

    2013-01-01

    We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys

  14. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
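    Although the paper shows there is no explicit expression for the MLE, a standard iterative scheme for Kronecker-structured covariances is the so-called flip-flop algorithm (not the authors' code; array names and the trace normalization below are illustrative choices). A minimal sketch for a matrix-normal sample `X` of shape (n, p, q):

```python
import numpy as np

def flip_flop(X, n_iter=30):
    """Flip-flop MLE sketch for a matrix-normal model: each sample X[i] is
    p x q with vec-covariance V (columns) Kronecker U (rows).  U and V are
    only identified up to a scale trade-off, so V is normalized to trace q."""
    n, p, q = X.shape
    U, V = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        Vinv = np.linalg.inv(V)
        U = sum(X[i] @ Vinv @ X[i].T for i in range(n)) / (n * q)
        Uinv = np.linalg.inv(U)
        V = sum(X[i].T @ Uinv @ X[i] for i in range(n)) / (n * p)
        # Resolve the scale indeterminacy; the product kron(V, U) is unchanged.
        s = np.trace(V) / q
        V /= s
        U *= s
    return U, V
```

    The scale normalization is one concrete reflection of the identifiability issue the paper analyzes: only the Kronecker product of the two factors is determined by the likelihood.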

  15. Implementing the Keele stratified care model for patients with low back pain: an observational impact study.

    Science.gov (United States)

    Bamford, Adrian; Nation, Andy; Durrell, Susie; Andronis, Lazaros; Rule, Ellen; McLeod, Hugh

    2017-02-03

    The Keele stratified care model for management of low back pain comprises use of the prognostic STarT Back Screening Tool to allocate patients into one of three risk-defined categories leading to associated risk-specific treatment pathways, such that high-risk patients receive enhanced treatment and more sessions than medium- and low-risk patients. The Keele model is associated with economic benefits and is being widely implemented. The objective was to assess the use of the stratified model following its introduction in an acute hospital physiotherapy department setting in Gloucestershire, England. Physiotherapists recorded data on 201 patients treated using the Keele model in two audits in 2013 and 2014. To assess whether implementation of the stratified model was associated with the anticipated range of treatment sessions, regression analysis of the audit data was used to determine whether high- or medium-risk patients received significantly more treatment sessions than low-risk patients. The analysis controlled for patient characteristics, year, physiotherapists' seniority and physiotherapist. To assess the physiotherapists' views on the usefulness of the stratified model, audit data on this were analysed using framework methods. To assess the potential economic consequences of introducing the stratified care model in Gloucestershire, published economic evaluation findings on back-related National Health Service (NHS) costs, quality-adjusted life years (QALYs) and societal productivity losses were applied to audit data on the proportion of patients by risk classification and estimates of local incidence. When the Keele model was implemented, patients received significantly more treatment sessions as the risk-rating increased, in line with the anticipated impact of targeted treatment pathways. Physiotherapists were largely positive about using the model. The potential annual impact of rolling out the model across Gloucestershire is a gain in approximately 30

  16. The Misspecification of the Covariance Structures in Multilevel Models for Single-Case Data: A Monte Carlo Simulation Study

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim

    2016-01-01

    The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…

  17. Covariation in Natural Causal Induction.

    Science.gov (United States)

    Cheng, Patricia W.; Novick, Laura R.

    1991-01-01

    Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)

  18. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
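    The bias described above comes from evaluating a nonlinear link at the mean covariate rather than averaging predicted responses over the covariate distribution. A toy logistic illustration with made-up coefficients (this is not the authors' proposed estimator, just the phenomenon it corrects):

```python
import numpy as np

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
x = rng.normal(0.0, 2.0, size=10_000)   # hypothetical baseline covariate
beta0, beta1 = -1.0, 1.5                # hypothetical fitted coefficients

# "Model-based" group mean as reported by many software packages:
# the response evaluated at the mean covariate.
mean_at_xbar = inv_logit(beta0 + beta1 * x.mean())

# Population-averaged group mean: average the predicted responses.
mean_response = inv_logit(beta0 + beta1 * x).mean()

print(mean_at_xbar, mean_response)  # the two differ because inv_logit is nonlinear
```

    For a linear model the two quantities coincide (the mean commutes with a linear function), which is why the issue only surfaces for generalized linear models.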

  19. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
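    The core Buckley-James-style imputation step of the parametric approach can be illustrated under an assumed exponential model (a hypothetical simplification: the bivariate method above additionally conditions on the other component's failure or censoring time):

```python
import numpy as np

def impute_censored_exponential(times, censored, rate):
    """Replace right-censored observations by their conditional expected
    failure times under an assumed Exponential(rate) model.
    By memorylessness of the exponential, E[T | T > c] = c + 1/rate."""
    times = np.asarray(times, dtype=float)
    out = times.copy()
    out[censored] = times[censored] + 1.0 / rate
    return out

# Example: the third and fifth observations are censored at 2.0 and 5.0.
t = [1.2, 0.7, 2.0, 3.1, 5.0]
cens = np.array([False, False, True, False, True])
print(impute_censored_exponential(t, cens, rate=0.5))  # [1.2 0.7 4.0 3.1 7.0]
```

    Model fitting then proceeds on the revised data set of uncensored values and imputed expectations, as the abstract describes.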

  20. Stratified flow model for convective condensation in an inclined tube

    International Nuclear Information System (INIS)

    Lips, Stéphane; Meyer, Josua P.

    2012-01-01

    Highlights: ► Convective condensation in an inclined tube is modelled. ► The heat transfer coefficient is the highest for about 20° below the horizontal. ► Capillary forces have a strong effect on the liquid–vapour interface shape. ► A good agreement between the model and the experimental results was observed. - Abstract: Experimental data are reported for condensation of R134a in an 8.38 mm inner diameter smooth tube in inclined orientations with a mass flux of 200 kg/m²s. Under these conditions, the flow is stratified and there is an optimum inclination angle, which leads to the highest heat transfer coefficient. There is a need for a model to better understand and predict the flow behaviour. In this paper, the state of the art of existing models of stratified two-phase flows in inclined tubes is presented, whereafter a new mechanistic model is proposed. The liquid–vapour distribution in the tube is determined by taking into account the gravitational and the capillary forces. The comparison between the experimental data and the model prediction showed a good agreement in terms of heat transfer coefficients and pressure drops. The effect of the interface curvature on the heat transfer coefficient has been quantified and has been found to be significant. The optimum inclination angle is due to a balance between an increase of the void fraction and an increase in the falling liquid film thickness when the tube is inclined downwards. The effect of the mass flux and the vapour quality on the optimum inclination angle has also been studied.

  1. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    Science.gov (United States)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk stratified NBSPs (risk 1 and risk 2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSPs including masking approaches (supplemental screening for women with higher breast density) were not cost-effective alternatives, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSPs.

  2. P2 : A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  3. Mathematical modeling of turbulent stratified flows. Application of liquid metal fast breeders

    Energy Technology Data Exchange (ETDEWEB)

    Villand, M.; Grand, D. [CEA-Service des Transferts Thermiques, Grenoble (France)]

    1983-07-01

    A mathematical model of turbulent stratified flow is proposed under the following assumptions: Newtonian fluid; incompressible fluid; coupling between temperature and momentum fields according to the Boussinesq approximation; two-dimensional invariance under translation or rotation; Cartesian or curvilinear coordinates. Solutions obtained by the proposed method are presented.

  4. Statistical mechanics of learning orthogonal signals for general covariance models

    International Nuclear Information System (INIS)

    Hoyle, David C

    2010-01-01

    Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real-world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit of N,p→∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained, and we find marked variation in its location depending on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulations.

  5. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  6. Two-phase pressurized thermal shock investigations using a 3D two-fluid modeling of stratified flow with condensation

    International Nuclear Information System (INIS)

    Yao, W.; Coste, P.; Bestion, D.; Boucker, M.

    2003-01-01

    In this paper, a local 3D two-fluid model for a turbulent stratified flow with/without condensation, which can be used to predict two-phase pressurized thermal shock, is presented. A modified turbulent K-ε model is proposed with turbulence production induced by interfacial friction. A model of interfacial friction based on an interfacial sublayer concept and three interfacial heat transfer models, namely, a model based on the small-eddies-controlled surface renewal concept (HDM, Hughes and Duffey, 1991), a model based on the asymptotic behavior of the Eddy Viscosity (EVM), and a model based on the Interfacial Sublayer concept (ISM), are implemented into a preliminary version of the NEPTUNE code based on the 3D module of the CATHARE code. As a first step to apply the above models to predict two-phase thermal shock, the models are evaluated by comparison of calculated profiles with several experiments: a turbulent air-water stratified flow without interfacial heat transfer; a turbulent steam-water stratified flow with condensation; turbulence induced by the impact of a water jet in a water pool. The prediction results agree well with the experimental data. In addition, the comparison of the three interfacial heat transfer models shows that EVM and ISM gave better prediction results, while HDM highly overestimated the interfacial heat transfers compared to the experimental data of a steam-water stratified flow.

  7. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    Science.gov (United States)

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
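    A minimal sketch of the regularize-then-invert idea, using a fixed shrinkage intensity in place of the data-driven Ledoit-Wolf optimum (the function name and intensity are invented for illustration):

```python
import numpy as np

def shrunk_partial_correlation(X, alpha=0.1):
    """Partial correlations from a shrinkage-regularized covariance.

    X : (n_samples, n_regions).  alpha is a fixed shrinkage intensity;
    the Ledoit-Wolf estimator would instead choose it from the data.
    Shrinking toward a scaled identity makes the matrix invertible even
    when n_samples < n_regions (the rank-deficient case); the precision
    matrix is then rescaled into partial correlations."""
    S = np.cov(X, rowvar=False)
    target = np.eye(S.shape[0]) * np.trace(S) / S.shape[0]
    S_shrunk = (1 - alpha) * S + alpha * target
    P = np.linalg.inv(S_shrunk)
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Rank-deficient case: more regions than samples still inverts cleanly.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))
pc = shrunk_partial_correlation(X)
```

    Without the shrinkage term, `np.linalg.inv` would fail here because the 50x50 sample covariance of 20 observations is singular.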

  8. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY), Notkestraße 85, 22607 Hamburg (Germany)]

    2017-05-30

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  9. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2017-01-01

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  10. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    Science.gov (United States)

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.

  11. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  12. Earth Observing System Covariance Realism

    Science.gov (United States)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
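    The Mahalanobis/ECDF idea can be sketched as follows, substituting a Kolmogorov-Smirnov test for the paper's unspecified ECDF GOF statistic (function and variable names are hypothetical):

```python
import numpy as np
from scipy import stats

def covariance_realism_gof(errors, P, alpha=0.05):
    """ECDF goodness-of-fit check that a 3x3 position covariance P is
    properly sized: if P is realistic, the squared Mahalanobis distances
    of the position errors follow a chi-squared distribution with 3 DoF."""
    Pinv = np.linalg.inv(P)
    d2 = np.einsum('ni,ij,nj->n', errors, Pinv, errors)
    # Kolmogorov-Smirnov test of the empirical distribution vs chi2(3).
    stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
    return p_value > alpha, p_value

# A correctly sized covariance should pass the test...
rng = np.random.default_rng(7)
P = np.diag([1.0, 4.0, 0.25])
errs = rng.multivariate_normal(np.zeros(3), P, size=2000)
ok, _ = covariance_realism_gof(errs, P)

# ...while an undersized (too-small) covariance should fail it.
bad, _ = covariance_realism_gof(errs, 0.5 * P)
```

    In the study's loop, process noise would be added to P until this kind of test passes at the chosen significance threshold.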

  13. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.

    2015-05-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
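    Of the approaches reviewed, the linear model of coregionalization is the simplest to sketch: each latent correlation function is weighted by a rank-one positive semidefinite coefficient matrix, so the resulting matrix-valued covariance is automatically nonnegative definite (the bivariate values below are illustrative, not from the article):

```python
import numpy as np

def lmc_cross_covariance(h, a_list, length_scales):
    """Linear model of coregionalization (LMC) sketch for p variables.

    C(h) = sum_k rho_k(h) * a_k a_k^T, where each rho_k is an exponential
    correlation function with its own length scale.  Each a_k a_k^T is
    positive semidefinite, so the construction is automatically a valid
    cross-covariance at every lag h."""
    p = len(a_list[0])
    C = np.zeros((p, p))
    for a_k, ell in zip(a_list, length_scales):
        rho = np.exp(-abs(h) / ell)
        C += rho * np.outer(a_k, a_k)
    return C

# Hypothetical bivariate example (e.g., temperature and pressure):
a1, a2 = np.array([1.0, 0.6]), np.array([0.2, 0.9])
C0 = lmc_cross_covariance(0.0, [a1, a2], [10.0, 2.0])   # co-located covariance
Ch = lmc_cross_covariance(5.0, [a1, a2], [10.0, 2.0])   # covariance at lag 5
```

    The off-diagonal entry of `C0` is the cross-covariance between the two variables at the same location; it decays with distance at a rate governed by both length scales.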

  14. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.; Kleiber, William

    2015-01-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.

  15. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit to accommodate general structures as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
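    The structural idea, zeros of the covariance matrix encoded by an undirected graph G, can be illustrated with a naive hard projection (hypothetical example; proper GCovGM estimation imposes the pattern inside the likelihood so the result stays positive definite, which this sketch does not guarantee):

```python
import numpy as np

def graph_constrained_covariance(S, adjacency):
    """Force covariance entries to zero wherever graph G has no edge,
    i.e., wherever the corresponding marker effects are assumed
    uncorrelated.  Diagonal (variance) entries are always kept."""
    mask = adjacency.astype(bool) | np.eye(S.shape[0], dtype=bool)
    return np.where(mask, S, 0.0)

# Hypothetical 4-marker example: only markers 0-1 and 2-3 are linked in G.
S = np.array([[1.0, 0.4, 0.1, 0.0],
              [0.4, 1.0, 0.0, 0.2],
              [0.1, 0.0, 1.0, 0.5],
              [0.0, 0.2, 0.5, 1.0]])
G = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
S_g = graph_constrained_covariance(S, G)
```

    Biological information enters exactly through the choice of G: edges are placed only between markers believed to have correlated effects.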

  16. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance...
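The mixing step this abstract describes — volatilities forecast from high-frequency data, correlations forecast from daily data — amounts to reassembling the covariance forecast as Σ = D R D. A hedged sketch with made-up forecast values (not from the paper):

```python
import numpy as np

# Hypothetical one-step-ahead forecasts: per-asset volatilities from a
# realized-volatility model, correlations from a daily-data model.
vol_forecast = np.array([0.012, 0.020, 0.015])
R_forecast = np.array([[1.0, 0.3, 0.1],
                       [0.3, 1.0, 0.5],
                       [0.1, 0.5, 1.0]])

# Recombine: Sigma = D R D, with D = diag(sigma).
D = np.diag(vol_forecast)
Sigma = D @ R_forecast @ D

assert np.allclose(Sigma, Sigma.T)
assert np.linalg.eigvalsh(Sigma).min() > 0
```

Forecasting the diagonal and off-diagonal parts separately, then recombining, is what lets the approach scale to large dimensions while keeping the result a valid covariance matrix.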

  17. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  18. Meson form factors and covariant three-dimensional formulation of composite model

    International Nuclear Information System (INIS)

    Skachkov, N.B.; Solovtsov, I.L.

    1978-01-01

An approach is developed which is applied in the framework of the relativistic quark model to obtain explicit expressions for meson form factors in terms of covariant wave functions of the two-quark system. These wave functions obey the two-particle quasipotential equation in which the relative motion of quarks is singled out in a covariant way. The exact form of the wave functions is found by passing to the relativistic configurational representation with the help of harmonic analysis on the Lorentz group, instead of the usual Fourier expansion, and then solving the resulting relativistic difference equation. The expressions found for the form factors are transformed into a three-dimensional covariant form which is a direct relativistic geometrical generalization of the analogous expressions of nonrelativistic quantum mechanics and provides the decrease of the meson form factor according to the law F_π(t) ~ t^{-1} as -t → ∞ in the Coulomb field.

  19. Covariance Bell inequalities

    Science.gov (United States)

    Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas

    2017-12-01

    We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.

  20. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  1. Smooth individual level covariates adjustment in disease mapping.

    Science.gov (United States)

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregresssive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregresssive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Real-time probabilistic covariance tracking with efficient model update.

    Science.gov (United States)

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
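A covariance region descriptor of the kind this tracker builds on is simply the covariance matrix of per-pixel feature vectors; descriptor similarity is then measured on the manifold of symmetric positive-definite matrices. A minimal sketch with synthetic features (the five-feature layout — position, intensity, gradient magnitudes — is a common choice but hypothetical here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-pixel features for a 500-pixel region:
# (x, y, intensity, |Ix|, |Iy|), one row per pixel.
F = rng.random((500, 5))

# The covariance region descriptor is the d x d covariance of the
# feature vectors, fusing spatial and appearance statistics.
C = np.cov(F, rowvar=False)

assert C.shape == (5, 5)
assert np.linalg.eigvalsh(C).min() > 0  # a valid (full-rank) covariance
```

Comparing two such descriptors with a Riemannian metric (e.g. via matrix logarithms), rather than a Euclidean one, is what the manifold-based similarity in the abstract refers to.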

  3. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali

    2014-01-01

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
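The comparison the abstract reports — Matérn versus exponential — hinges on behaviour near the origin: the exponential model is the Matérn special case ν = 1/2, while larger ν gives smoother fields. A sketch of the ν = 3/2 closed form against the exponential model (unit variance and range are arbitrary choices, not fitted values from the paper):

```python
import numpy as np

def matern32(d, sigma2=1.0, rho=1.0):
    """Matérn covariance with smoothness nu = 3/2 (closed form)."""
    a = np.sqrt(3.0) * np.asarray(d) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

def exponential(d, sigma2=1.0, rho=1.0):
    """Exponential covariance: the Matérn special case nu = 1/2."""
    return sigma2 * np.exp(-np.asarray(d) / rho)

# Near the origin the Matérn-3/2 covariance has a flat tangent (smoother
# field), while the exponential covariance drops off linearly:
assert matern32(0.0) == exponential(0.0) == 1.0
assert matern32(0.1) > exponential(0.1)
```

Fitting the smoothness parameter ν from data, rather than fixing it at 1/2, is what allows the Matérn family to outperform the exponential model at short time scales.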

  6. Ultracentrifuge separative power modeling with multivariate regression using covariance matrix

    International Nuclear Information System (INIS)

    Migliavacca, Elder

    2004-01-01

In this work, the least-squares methodology with a covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of the experimentally controlled variables. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties in these independent variables are considered in the calculation of the experimental separative power values, determining an experimental data input covariance matrix. The process variables that significantly influence the δU values are chosen so as to give information on the ultracentrifuge behaviour when it is operated at several levels of feed flow rate F, cut θ and product line pressure P_p. After validating the goodness of fit of the model, a residual analysis is carried out to verify the assumptions of randomness and independence and, in particular, to check for residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves relating the separative power to the control variables F, θ and P_p are produced to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
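The least-squares-with-covariance-matrix fit described here is generalized least squares: the data covariance V weights the normal equations and also yields the parameter covariance. A hedged numpy sketch on synthetic data (the design, true coefficients, and error covariance are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical setup: a response (e.g. separative power) regressed on two
# controlled variables, with a known observation covariance matrix V.
X = np.column_stack([np.ones(20), rng.random(20), rng.random(20)])
beta_true = np.array([1.0, 2.0, -0.5])
V = np.diag(rng.uniform(0.01, 0.05, 20))           # heteroscedastic errors
y = X @ beta_true + rng.multivariate_normal(np.zeros(20), V)

# Generalized least squares:
#   beta = (X' V^-1 X)^-1 X' V^-1 y,  cov(beta) = (X' V^-1 X)^-1
Vinv = np.linalg.inv(V)
cov_beta = np.linalg.inv(X.T @ Vinv @ X)
beta_hat = cov_beta @ X.T @ Vinv @ y

assert beta_hat.shape == (3,)
assert np.linalg.eigvalsh(cov_beta).min() > 0
```

With a full (non-diagonal) input covariance matrix, as in the ultracentrifuge study, the same formulas apply unchanged; only V changes.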

  7. The error and covariance structures of the mean approach model of pooled cross-section and time series data

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1991-01-01

This paper postulates the assumptions underlying the Mean Approach model and recasts the normal equations of this model as partitioned matrices of covariances. These covariance structures are then analysed. (author). 16 refs

  8. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
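The EM idea in this abstract — estimating the additive model error covariance Q from smoothed states — reduces, in the linear-Gaussian case, to averaging outer products of the one-step residuals (a full M-step would also add the smoothed state covariance terms, omitted here). A toy sketch with a hypothetical two-state model, using simulated true states as a stand-in for the smoother output:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy linear model: x_t = M x_{t-1} + eta_t, eta_t ~ N(0, Q).
M = np.array([[0.9, 0.1], [0.0, 0.8]])
Q_true = np.diag([0.2, 0.1])
T = 2000
xs = np.zeros((T, 2))
for t in range(1, T):
    xs[t] = M @ xs[t - 1] + rng.multivariate_normal(np.zeros(2), Q_true)

# M-step sketch: average the outer products of the model residuals
# (row-wise, M @ x[t-1] is written as x[t-1] @ M.T).
resid = xs[1:] - xs[:-1] @ M.T
Q_hat = resid.T @ resid / (T - 1)

assert np.allclose(Q_hat, Q_true, atol=0.05)
```

Iterating this update with a Kalman (or ensemble) smoother re-run at each step is the EM loop whose convergence, for additive errors, the paper establishes.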

  9. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate increases in mean integrated squared error efficiency of up to 62% and 54% when compared to existing alternatives, using truncated polynomial splines and B-splines respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating that the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
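The computational point here — the full conditional of a mismeasured covariate being a mixture of double-truncated normals, so a Gibbs step is possible — needs only a sampler for a truncated normal. A hedged numpy-only sketch using rejection sampling (adequate away from extreme tails; the truncation interval and moments are arbitrary, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def trunc_normal(mu, sigma, lo, hi, size, rng):
    """Rejection sampler for N(mu, sigma^2) truncated to [lo, hi]
    (fine when the interval carries non-negligible mass)."""
    out = np.empty(0)
    while out.size < size:
        z = rng.normal(mu, sigma, size=4 * size)
        out = np.concatenate([out, z[(z >= lo) & (z <= hi)]])
    return out[:size]

# One Gibbs update per covariate would draw from such a component
# (after first picking a mixture component):
draws = trunc_normal(0.0, 1.0, -0.5, 2.0, 1000, rng)
assert draws.min() >= -0.5 and draws.max() <= 2.0
assert abs(draws.mean() - 0.45) < 0.15   # mass shifts into [-0.5, 2]
```

Because such draws are exact, no Metropolis-Hastings tuning or mixing diagnostics are needed for the covariate updates, which is the source of the efficiency gains reported.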

  10. Mass spectra and wave functions of meson systems and the covariant oscillator quark model as an expansion basis

    International Nuclear Information System (INIS)

    Oda, Ryuichi; Ishida, Shin; Wada, Hiroaki; Yamada, Kenji; Sekiguchi, Motoo

    1999-01-01

We examine mass spectra and wave functions of the nn̄, cc̄ and bb̄ meson systems within the framework of the covariant oscillator quark model with the boosted LS-coupling scheme. We solve nonperturbatively an eigenvalue problem for the squared-mass operator, which incorporates the four-dimensional color-Coulomb-type interaction, by taking a set of covariant oscillator wave functions as an expansion basis. We obtain mass spectra of these meson systems, which reproduce quite well their experimental behavior. The resultant manifestly covariant wave functions, which are applicable to analyses of various reaction phenomena, are given. Our results seem to suggest that the present model may be considered effectively as a covariant version of the nonrelativistic linear-plus-Coulomb potential quark model. (author)

  12. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw

    Science.gov (United States)

    Baas, P.; van de Wiel, B. J. H.; van der Linden, S. J. A.; Bosveld, F. C.

    2018-02-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably-stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s^{-1} bin-width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than - 30 W m^{-2}. A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature and time series of turbulent fluxes showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly-stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that in principle the physical parametrizations of large-scale atmospheric models are sufficiently equipped for modelling stably-stratified conditions for a wide range of forcing conditions.

  13. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2016-10-01

We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams". The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such a derivation can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can easily be accounted for.

  15. Evaluation of covariance in theoretical calculation of nuclear data

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki

    1981-01-01

Covariances of cross sections calculated with the statistical model are discussed. Two categories of covariance are considered: one caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for ¹⁰⁰Ru. (author)

  16. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
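The hybrid covariance referred to here is a weighted average of a static (climatological) background covariance and a flow-dependent ensemble covariance; the paper's contribution is a cheap, data-driven way to choose the weight. A minimal sketch with invented matrices and weight:

```python
import numpy as np

# Hybrid error covariance: weighted average of a static covariance B
# and a flow-dependent ensemble covariance P_ens.
B = np.array([[1.0, 0.2], [0.2, 1.0]])
P_ens = np.array([[0.5, 0.4], [0.4, 0.6]])

w = 0.3                                    # hypothetical hybrid weight
P_hybrid = w * B + (1.0 - w) * P_ens

assert np.allclose(P_hybrid, P_hybrid.T)
assert np.linalg.eigvalsh(P_hybrid).min() > 0
```

In the paper, w would instead come from the formula derived from (observation-minus-forecast, ensemble-variance) pairs, replacing the brute-force search over candidate weights.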

  18. Activities on covariance estimation in Japanese Nuclear Data Committee

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
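The KALMAN-style propagation of parameter uncertainties to calculated cross sections follows the standard sandwich rule cov(σ) = S C_p Sᵀ, with S the matrix of sensitivities of the calculated quantities to the model parameters. A sketch with invented sensitivities and parameter covariance (not actual evaluation data):

```python
import numpy as np

# Sandwich rule: if model parameters p have covariance C_p and the
# calculated cross sections depend (locally linearly) on p with
# sensitivities S = d(sigma)/d(p), then cov(sigma) = S C_p S^T.
S = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.1]])                 # 3 cross sections, 2 parameters
C_p = np.array([[0.04, 0.01],
                [0.01, 0.09]])             # parameter covariance

C_sigma = S @ C_p @ S.T

assert C_sigma.shape == (3, 3)
assert np.linalg.eigvalsh(C_sigma).min() >= -1e-12
```

Note that with fewer parameters than cross sections, the resulting covariance matrix is rank-deficient but still nonnegative definite, i.e. a valid (if strongly correlated) covariance.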

  19. Competing risks and time-dependent covariates

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Andersen, Per K

    2010-01-01

Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates...

  20. Assessment of horizontal in-tube condensation models using MARS code. Part I: Stratified flow condensation

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong-Su [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Hong, Soon-Joon, E-mail: sjhong90@fnctech.com [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Park, Ju-Yeop; Seul, Kwang-Won [Korea Institute of Nuclear Safety, 19 Kuseong-dong, Yuseong-gu, Daejon (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of)

    2013-01-15

Highlights: • This study collected 11 horizontal in-tube condensation models for stratified flow. • This study assessed the predictive capability of the models for steam condensation. • Purdue-PCCS experiments were simulated using the MARS code incorporating the models. • The Cavallini et al. (2006) model predicts the data well for stratified flow conditions. • The results of this study can be used to improve the condensation model in RELAP5 or MARS. - Abstract: The accurate prediction of horizontal in-tube condensation heat transfer is a primary concern in the optimum design and safety analysis of horizontal heat exchangers of passive safety systems such as the passive containment cooling system (PCCS), the emergency condenser system (ECS) and the passive auxiliary feed-water system (PAFS). It is essential to analyze and assess the predictive capability of previous horizontal in-tube condensation models for each flow regime using various experimental data. This study assessed a total of 11 condensation models for stratified flow, one of the main flow regimes encountered in horizontal condensers, against heat transfer data from the Purdue-PCCS experiment using the multi-dimensional analysis of reactor safety (MARS) code. From the assessments, it was found that the models by Akers and Rosson, Chato, Tandon et al., Sweeney and Chato, and Cavallini et al. (2002) under-predicted the data in the main condensation heat transfer region; on the contrary, the models by Rosson and Meyers, Jaster and Kosky, Fujii, Dobson and Chato, and Thome et al. similarly- or over-predicted the data; notably, the Cavallini et al. (2006) model shows good predictive capability for all test conditions. The results of this study can be used to improve the condensation models in thermal-hydraulic codes such as RELAP5 or MARS.

  1. Simultaneous genetic analysis of longitudinal means and covariance structure in the simplex model using twin data

    NARCIS (Netherlands)

    Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.

    1991-01-01

    D. Soerbom's (1974, 1976) simplex model approach to simultaneous analysis of means and covariance structure was applied to analysis of means observed in a single group. The present approach to the simultaneous biometric analysis of covariance and mean structure is based on the testable assumption

  2. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A method to specify and fit structural equation models in the Redundancy Analysis framework, based on so-called Extended Redundancy Analysis (ERA), has recently been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we present a small-sample simulation study. Moreover, we present an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  3. Influence of covariate distribution on the predictive performance of pharmacokinetic models in paediatric research

    Science.gov (United States)

    Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar

    2014-01-01

    Aims The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411
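The weight-based study design described above can be illustrated with a toy allometric pharmacokinetic model in which clearance scales with body weight; the parameter values, exponent and dose below are illustrative assumptions, not the paper's hypothetical drug.

```python
# Toy allometric PK model: clearance (CL) scales with body weight (WT),
# and single-dose systemic exposure is AUC = dose / CL.
def clearance(wt, cl_typical=5.0, wt_ref=70.0, exponent=0.75):
    """Allometric clearance (L/h); all parameter values are illustrative."""
    return cl_typical * (wt / wt_ref) ** exponent

def auc(dose, wt, **kw):
    """Area under the concentration-time curve for a single dose."""
    return dose / clearance(wt, **kw)

# Midpoints of the two paediatric subgroups from the abstract (10-15 kg, 30-40 kg).
auc_light = auc(100.0, 12.5)
auc_heavy = auc(100.0, 35.0)
# Lighter children have lower clearance and hence higher exposure for the same
# dose, which is why extrapolating parameters across weight ranges is risky.
print(round(auc_light, 2), round(auc_heavy, 2))
```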

  4. Experimental determination and modelling of interface area concentration in horizontal stratified flow

    International Nuclear Information System (INIS)

    Junqua-Moullet, Alexandra

    2003-01-01

    This research thesis concerns the modelling and experimental study of two-phase liquid/gas (water/air) flows using the two-fluid model, a six-equation model. The author first addresses the modelling of interfacial quantities for a known topology (the problem of two-fluid model closure, closure relationships for some variables, and the equation for a given configuration). She reports the development of an equation system for interfacial quantities. The next parts deal with experiments and report the study of stratified flows in the THALC experiment, and more particularly the study of the interfacial area concentration and of the liquid velocities in such flows. Results are discussed, as well as their consistency

  5. Covariant single-hole optical potential

    International Nuclear Information System (INIS)

    Kam, J. de

    1982-01-01

    In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K⁻ projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)

  6. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
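The central object in the record above, an inverse covariance (precision) matrix, encodes conditional independence between regions through its zeros. A minimal sketch of that idea (not the paper's Bayesian model), using a made-up three-region covariance in which regions 0 and 2 are linked only through region 1:

```python
def mat_inverse(A):
    """Invert a small matrix by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for r in range(n):
            if r != i:
                f = M[r][i]
                M[r] = [vr - f * vi for vr, vi in zip(M[r], M[i])]
    return [row[n:] for row in M]

# Covariance of a chain 0 - 1 - 2 (correlation 0.5 between neighbours,
# 0.25 = 0.5 * 0.5 between the endpoints).
cov = [[1.0, 0.5, 0.25],
       [0.5, 1.0, 0.5],
       [0.25, 0.5, 1.0]]
prec = mat_inverse(cov)
# The (0, 2) precision entry is ~0: regions 0 and 2 are conditionally
# independent given region 1, even though they are marginally correlated.
print(abs(prec[0][2]) < 1e-9)
```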

  7. Treatment Effects with Many Covariates and Heteroskedasticity

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.

    The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...

  8. Modelling of vapour explosion in stratified geometry

    International Nuclear Information System (INIS)

    Picchi, St.

    1999-01-01

    When a hot liquid comes into contact with a colder volatile liquid, an explosive vaporization, known as a vapour explosion, can occur under some conditions, with potentially serious consequences for neighbouring structures. This explosion requires intimate mixing and fine fragmentation of the two liquids. In a stratified vapour explosion, the two liquids are initially superposed and separated by a vapour film. Triggering the explosion can induce its propagation along the film. A review of experimental results and existing models led to the following main points: - the explosion propagation is due to a pressure wave propagating through the medium; - the mixing is due to the development of Kelvin-Helmholtz instabilities induced by the shear velocity between the two liquids behind the pressure wave. The presence of vapour in the volatile liquid explains the experimental propagation velocities and the velocity difference between the two fluids at the passage of the pressure wave. A first model was proposed by Brayer in 1994 to describe the fragmentation and mixing of the two fluids, but his results did not show explosion propagation. We have therefore built a new mixing-fragmentation model based on the atomization phenomenon that develops during the passage of the pressure wave. We have also taken into account the transient nature of the heat transfer between fuel drops and the volatile liquid, and developed a model of transient heat transfer. These two models have been introduced into MC3D, a multi-component thermal-hydraulic code. Calculation results show qualitative and quantitative agreement with experiments and confirm the basic choices of the model. (author)

  9. Introduction to covariant formulation of superstring (field) theory

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    The author discusses the covariant formulation of superstring theories based on BRS invariance. A new formulation of the superstring was constructed by Green and Schwarz, first in the light-cone gauge, after which a covariant action was discovered. The covariant action has an interesting geometrical interpretation; however, covariant quantization is difficult to perform because of the existence of local supersymmetries. A modified action has been proposed by introducing extra variables into the action. However, it would be difficult to prescribe constraints defining a physical subspace, or to reproduce the correct physical spectrum. Hence the older formulation, the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges, and then discusses the field theory of (free) superstrings.

  10. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    Science.gov (United States)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model, with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. The interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to produce better estimates than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05.
The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and
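As an illustration of the kind of simple snow model used as a covariate above, here is a temperature-index (degree-day) stand-in driven by daily temperature and precipitation; note the study itself uses a 1-layer energy balance model, and the melt factor and snow threshold below are assumptions, not values from the study.

```python
# Temperature-index snow model: precipitation accumulates as snow on cold
# days, and melt is proportional to degrees above a threshold.
def simulate_swe(temps, precips, melt_factor=3.0, snow_thresh=0.0):
    """Daily SWE series (mm) from temperature (deg C) and precipitation (mm)."""
    swe, series = 0.0, []
    for t, p in zip(temps, precips):
        if t <= snow_thresh:                 # precipitation falls as snow
            swe += p
        melt = melt_factor * max(t - snow_thresh, 0.0)  # degree-day melt
        swe = max(swe - melt, 0.0)
        series.append(swe)
    return series

# Accumulation over three cold days, then melt-out over three warm days.
swe_series = simulate_swe([-5, -2, -1, 2, 4, 6], [10, 8, 5, 3, 0, 0])
print(swe_series)
```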

  11. Integrating lysimeter drainage and eddy covariance flux measurements in a groundwater recharge model

    DEFF Research Database (Denmark)

    Vasquez, Vicente; Thomsen, Anton Gårde; Iversen, Bo Vangsø

    2015-01-01

    Field scale water balance is difficult to characterize because controls exerted by soils and vegetation are mostly inferred from local scale measurements with relatively small support volumes. Eddy covariance flux and lysimeters have been used to infer and evaluate field scale water balances... because they have larger footprint areas than local soil moisture measurements. This study quantifies heterogeneity of soil deep drainage (D) in four 12.5 m2 repacked lysimeters, compares evapotranspiration from eddy covariance (ETEC) and mass balance residuals of lysimeters (ETwbLys), and models D...

  12. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect: differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  13. Development of covariance data for fast reactor cores. 3

    International Nuclear Information System (INIS)

    Shibata, Keiichi; Hasegawa, Akira

    1999-03-01

    Covariances have been estimated for nuclear data contained in JENDL-3.2. For Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated using the same methodology as in the JENDL-3.2 evaluation in order to keep consistency between mean values and their covariances. In cases where the evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross sections that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances that had been prepared in the previous fiscal year were re-examined, and some improvements were made. Parts of the Fe and ²³⁵U covariances were updated. Covariances of nu-p and nu-d for ²⁴¹Pu and of fission neutron spectra for ²³³,²³⁵,²³⁸U and ²³⁹,²⁴⁰Pu were newly added to the data files. (author)

  14. Chiral phase transition in a covariant nonlocal NJL model

    International Nuclear Information System (INIS)

    General, I.; Scoccola, N.N.

    2001-01-01

    The properties of the chiral phase transition at finite temperature and chemical potential are investigated within a nonlocal covariant extension of the NJL model based on a separable quark-quark interaction. We find that for low values of T the chiral transition is always of first order and, for finite quark masses, at a certain end point the transition turns into a smooth crossover. Our prediction for the position of this point is similar to, although somewhat smaller than, previous estimates. (author)

  15. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci...... of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we...... developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  16. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  17. Covariance problem in two-dimensional quantum chromodynamics

    International Nuclear Information System (INIS)

    Hagen, C.R.

    1979-01-01

    The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case

  18. MODELS OF COVARIANCE FUNCTIONS OF GAUSSIAN RANDOM FIELDS ESCAPING FROM ISOTROPY, STATIONARITY AND NON NEGATIVITY

    Directory of Open Access Journals (Sweden)

    Pablo Gregori

    2014-03-01

    This paper presents a survey of recent advances in the modeling of space or space-time Gaussian Random Fields (GRF), tools of geostatistics useful for understanding special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariance between some pairs of locations is evident. We show some strategies for escaping these restrictions, based on rich classes of well-known stationary or isotropic non-negative covariance models, and through suitable operations such as linear combinations, generalized means, or particular Fourier transforms.
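A minimal sketch of one of the construction strategies surveyed in the record above: building new valid covariance models from known non-negative stationary ones by linear combinations (and products). All parameter values are arbitrary.

```python
import math

def exponential_cov(h, sill=1.0, rng=1.0):
    """Exponential model C(h) = sill * exp(-|h| / rng)."""
    return sill * math.exp(-abs(h) / rng)

def gaussian_cov(h, sill=1.0, rng=1.0):
    """Gaussian model C(h) = sill * exp(-(h / rng)^2)."""
    return sill * math.exp(-(h / rng) ** 2)

def combined_cov(h, w1=0.6, w2=0.4):
    """A non-negative linear combination of valid covariances is valid."""
    return w1 * exponential_cov(h) + w2 * gaussian_cov(h)

def product_cov(h):
    """A product of valid covariances is also a valid covariance."""
    return exponential_cov(h) * gaussian_cov(h)

print(combined_cov(0.0))  # total sill at lag zero: 0.6 + 0.4 = 1.0
```

Differences of covariance functions, by contrast, are not guaranteed to remain valid, which is one reason models admitting negative covariances require the more careful constructions discussed in the paper.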

  19. Model-driven development of covariances for spatiotemporal environmental health assessment.

    Science.gov (United States)

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.

  20. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.

    2011-01-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models

  1. Covariance Function for Nearshore Wave Assimilation Systems

    Science.gov (United States)

    2018-01-30

    ...which is applicable for any spectral wave model. The four-dimensional variational (4DVar) assimilation methods are based on the mathematical ...covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on...

  2. Non-stationary covariance function modelling in 2D least-squares collocation

    Science.gov (United States)

    Darbeheshti, N.; Featherstone, W. E.

    2009-06-01

    Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC to the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
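The kernel convolution construction admits a closed form in the Gaussian-kernel case: convolving spatially varying Gaussian kernels yields the Paciorek-Schervish non-stationary covariance. The 1D sketch below uses an illustrative bandwidth function, not one fitted to the Darling Fault data.

```python
import math

def bandwidth(x):
    """Spatially varying kernel std dev: an assumed, smoothly growing field."""
    return 0.5 + 0.1 * abs(x)

def nonstationary_cov(x1, x2, sill=1.0):
    """1D Paciorek-Schervish covariance from Gaussian kernel convolution."""
    s1, s2 = bandwidth(x1) ** 2, bandwidth(x2) ** 2
    avg = 0.5 * (s1 + s2)
    prefactor = (s1 * s2) ** 0.25 / math.sqrt(avg)
    return sill * prefactor * math.exp(-((x1 - x2) ** 2) / avg)

# The correlation length now depends on location: the same unit separation
# decays faster where the bandwidth is small (near x = 0) than where it is
# large (near x = 10), mimicking rougher and smoother regions of a field.
near = nonstationary_cov(0.0, 1.0)
far = nonstationary_cov(10.0, 11.0)
print(near < far)  # True: longer effective range at larger |x|
```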

  3. Emergent gravity on covariant quantum spaces in the IKKT model

    Energy Technology Data Exchange (ETDEWEB)

    Steinacker, Harold C. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Vienna (Austria)

    2016-12-30

    We study perturbations of 4-dimensional fuzzy spheres as backgrounds in the IKKT or IIB matrix model. Gauge fields and metric fluctuations are identified among the excitation modes with lowest spin, supplemented by a tower of higher-spin fields. They arise from an internal structure which can be viewed as a twisted bundle over S⁴, leading to a covariant noncommutative geometry. The linearized 4-dimensional Einstein equations are obtained from the classical matrix model action under certain conditions, modified by an IR cutoff. Some one-loop contributions to the effective action are computed using the formalism of string states.

  4. Applications of Multidimensional Item Response Theory Models with Covariates to Longitudinal Test Data. Research Report. ETS RR-16-21

    Science.gov (United States)

    Fu, Jianbin

    2016-01-01

    The multidimensional item response theory (MIRT) models with covariates proposed by Haberman and implemented in the "mirt" program provide a flexible way to analyze data based on item response theory. In this report, we discuss applications of the MIRT models with covariates to longitudinal test data to measure skill differences at the…

  5. Stochastic modeling of the Earth's magnetic field: Inversion for covariances over the observatory era

    DEFF Research Database (Denmark)

    Gillet, N.; Jault, D.; Finlay, Chris

    2013-01-01

    Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core-mantle boundary together with its associated model covariances. However, most currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....

  6. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    Science.gov (United States)

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

    Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
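lme4qtl itself is an R package; as a rough language-neutral sketch of the model class it fits, the following computes a generalized least-squares estimate of the fixed effects under Var(y) = sg2*K + se2*I with a custom kinship matrix K. The data, kinship structure and variance components are all illustrative assumptions (in practice the variance components are estimated, e.g. by REML).

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Two sib pairs: kinship 0.5 within a pair, 0 across pairs.
K = [[1.0, 0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5, 1.0]]
sg2, se2 = 0.6, 0.4                    # assumed variance components
V = [[sg2 * K[i][j] + (se2 if i == j else 0.0) for j in range(4)]
     for i in range(4)]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]  # intercept + SNP dosage
y = [1.0, 3.0, 5.0, 7.0]                               # exactly 1 + 2*dosage

# GLS: beta = (X^T V^-1 X)^-1 X^T V^-1 y, computed via solves with V.
Vinv_cols = [solve(V, [X[i][j] for i in range(4)]) for j in range(2)]
Vinv_y = solve(V, y)
A = [[sum(X[i][r] * Vinv_cols[c][i] for i in range(4)) for c in range(2)]
     for r in range(2)]
rhs = [sum(X[i][r] * Vinv_y[i] for i in range(4)) for r in range(2)]
beta = solve(A, rhs)
print([round(b, 6) for b in beta])  # recovers [1.0, 2.0]: y is exactly linear
```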

  7. Graphical representation of covariant-contravariant modal formulae

    Directory of Open Access Journals (Sweden)

    Miguel Palomino

    2011-08-01

    Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.

  8. A New Approach for Nuclear Data Covariance and Sensitivity Generation

    International Nuclear Information System (INIS)

    Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.

    2005-01-01

    Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation, such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes.

  9. Covariate-adjusted measures of discrimination for survival data

    DEFF Research Database (Denmark)

    White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth

    2015-01-01

    by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination...... statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...

  10. Cross-covariance functions for multivariate random fields based on latent dimensions

    KAUST Repository

    Apanasovich, T. V.

    2010-02-16

    The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance model performs better than other competing models. © 2010 Biometrika Trust.
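    The latent-dimension idea can be illustrated numerically: give each component of the multivariate field an extra latent coordinate, apply a valid univariate covariance in the augmented space, and the resulting joint covariance matrix is automatically positive semi-definite. The sketch below uses an assumed exponential covariance and arbitrary latent coordinates, not the authors' spatio-temporal parameterization:

```python
import numpy as np

def exp_cov(d, range_=1.0):
    # Exponential covariance, valid in any dimension.
    return np.exp(-d / range_)

# Latent coordinates: one extra scalar per component (assumed values).
xi = {0: 0.0, 1: 0.7, 2: 1.3}   # three components

def cross_cov(s1, s2, i, j):
    # Distance in the augmented space (spatial coords + latent coord);
    # i == j recovers the marginal covariance of component i.
    d = np.sqrt(np.sum((s1 - s2) ** 2) + (xi[i] - xi[j]) ** 2)
    return exp_cov(d)

# Build the joint covariance of all components at a handful of sites.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
comps = [0, 1, 2]
idx = [(s, c) for s in range(len(sites)) for c in comps]
C = np.array([[cross_cov(sites[a], sites[b], ca, cb)
               for (b, cb) in idx] for (a, ca) in idx])

# Validity check: the joint covariance matrix is positive semi-definite
# by construction, since it is a covariance on the augmented space.
eigmin = np.linalg.eigvalsh(C).min()
print(eigmin >= -1e-10)  # True
```

The distance between latent coordinates controls how strongly correlated the components are: identical latent coordinates give perfectly collocated-correlated components, widely separated ones give nearly independent components.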

  11. Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library

    International Nuclear Information System (INIS)

    Oblozinsky, P.; Mattoon, C.M.; Herman, M.; Mughabghab, S.F.; Pigni, M.T.; Talou, P.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.

    2009-01-01

    Improved neutron cross section covariances were produced for 110 materials, including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10⁻⁵ eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: ²³Na and ⁵⁵Mn, where more detailed evaluations were done; improvements in the major structural materials ⁵²Cr, ⁵⁶Fe and ⁵⁸Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for ²³Na and ⁵⁶Fe. LANL contributed improved covariance data for ²³⁵U and ²³⁹Pu, including prompt neutron fission spectra, and a completely new evaluation for ²⁴⁰Pu. A new R-matrix evaluation for ¹⁶O, including mubar covariances, is nearing completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.

  12. Evaluation of covariance for 238U cross sections

    International Nuclear Information System (INIS)

    Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori

    1995-01-01

    Covariances of ²³⁸U are generated using analytic functions for the representation of the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and the inelastic scattering cross sections are estimated with a linearized nuclear model calculation. (author)
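    Linearized model calculations of this kind propagate model-parameter covariances to cross-section covariances through sensitivity derivatives, often called the "sandwich rule": C_σ = S C_p Sᵀ. A minimal numpy sketch with purely illustrative numbers:

```python
import numpy as np

# Hypothetical setup: 3 energy groups, 2 nuclear-model parameters.
# S[i, k] = d(sigma_i)/d(p_k), the sensitivity of cross section i
# to model parameter k (values are illustrative only).
S = np.array([[ 1.2, -0.3],
              [ 0.8,  0.5],
              [ 0.1,  0.9]])

# Assumed model-parameter covariance matrix (symmetric, PSD).
C_p = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# "Sandwich rule": propagate parameter uncertainty to cross sections.
C_sigma = S @ C_p @ S.T

# Standard deviations of the three group cross sections.
print(np.sqrt(np.diag(C_sigma)))
```

Because the propagated covariance has at most the rank of C_p, two parameters here induce fully correlated structure across the three groups — which is exactly why covariance files, not just variances, are needed downstream.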

  13. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    Science.gov (United States)

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently conducted GenIMS study.
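    The core of such a procedure can be sketched in a few lines: draw each censored covariate value from its conditional distribution truncated at the detection limit, refit the model on each completed dataset, and pool the estimates. The sketch below assumes a known joint normal model and a simple linear outcome, standing in for the paper's AFT model with seminonparametric errors; in practice the imputation parameters would be estimated iteratively, roughly as in the paper's EM-like algorithm:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Simulated data: biomarker x ~ N(0,1); outcome y = 2 + 1.5*x + e.
n = 2000
x = rng.normal(0.0, 1.0, n)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5, n)

# Left-censor the covariate at a known detection limit DL.
DL = -0.5
censored = x < DL

# Conditional distribution x | y under the (here, known) joint model:
# E[x|y] = Cov(x,y)/Var(y) * (y-2) = 0.6*(y-2), SD[x|y] = sqrt(0.1).
mu_xy = 0.6 * (y - 2.0)
sd_xy = float(np.sqrt(0.1))

def impute_once(rng):
    # Draw each censored x from N(mu_xy, sd_xy) truncated to (-inf, DL]
    # via inverse-CDF sampling.
    xi = x.copy()
    for i in np.where(censored)[0]:
        nd = NormalDist(float(mu_xy[i]), sd_xy)
        p = nd.cdf(DL)                         # mass below detection limit
        xi[i] = nd.inv_cdf(p * (1.0 - rng.uniform()))
    return xi

# Multiple imputation: fit OLS on each completed dataset and pool the
# slope estimates (Rubin's rule for the point estimate is the mean).
slopes = []
for _ in range(20):
    X = np.column_stack([np.ones(n), impute_once(rng)])
    slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
slope_mi = float(np.mean(slopes))
print(slope_mi)  # close to the true slope 1.5
```

Imputing from the marginal of x instead of the conditional given y — a tempting shortcut — would attenuate the slope, which is the kind of bias the simulation studies in the paper document for naive alternatives.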

  14. Quark model with chiral-symmetry breaking and confinement in the Covariant Spectator Theory

    Energy Technology Data Exchange (ETDEWEB)

    Biernat, Elmer P. [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Pena, Maria Teresa [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Ribeiro, José Emílio F. [CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Stadler, Alfred [Departamento de Física, Universidade de Évora, 7000-671 Évora, Portugal; Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-03-01

    We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel the axial-vector Ward-Takahashi identity is preserved, and our model complies with the Adler-zero constraint for π-π scattering imposed by chiral symmetry.

  15. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    Science.gov (United States)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the pion's valence-quark case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows the ambiguities related to the covariant extension, grounded in the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  16. Generalized Linear Covariance Analysis

    Science.gov (United States)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  17. The breaking of Bjorken scaling in the covariant parton model

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes be modified as well as the parton being given form factors. Purely logarithmic violation is possible, and the resulting model has many features in common with asymptotically free gauge theories. Behaviour at large and small ω and fixed q² is investigated: νW₂ should increase with q² at large ω and decrease with q² at small ω. Heuristic arguments are also given which suggest that the model would only lead to logarithmic modifications of dimensional counting results in purely hadronic deep scattering. (Auth.)

  18. A modified stratified model for the 3C 273 jet

    International Nuclear Information System (INIS)

    Liu Wenpo; Shen Zhiqiang

    2009-01-01

    We present a modified stratified jet model to interpret the observed spectral energy distributions of knots in the 3C 273 jet. Based on the hypothesis of a single index of the particle energy spectrum at injection and identical emission processes among all the knots, the observed difference of spectral shape among different 3C 273 knots can be understood as a manifestation of the deviation of the equivalent Doppler factor of stratified emission regions in an individual knot from a characteristic one. The summed spectral energy distributions of all ten knots in the 3C 273 jet can be well fitted by two components: a low-energy component (radio to optical) dominated by synchrotron radiation and a high-energy component (UV, X-ray and γ-ray) dominated by inverse Compton scattering of the cosmic microwave background. This gives a consistent spectral index of α = 0.88 (Sν ∝ ν⁻α) and a characteristic Doppler factor of 7.4. Assuming the average of the summed spectrum as the characteristic spectrum of each knot in the 3C 273 jet, we further get a distribution of Doppler factors. We discuss the possible implications of these results for the physical properties of the 3C 273 jet. Future GeV observations with GLAST could separate the γ-ray emission of 3C 273 from the large-scale jet and the small-scale jet (i.e. the core) through measuring the GeV spectrum.

  19. Stochastic modelling of the Earth’s magnetic field: inversion for covariances over the observatory era

    DEFF Research Database (Denmark)

    Gillet, Nicolas; Jault, D.; Finlay, Chris

    2013-01-01

    Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core mantle boundary together with its associated model covariances. However, all currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....

  20. Covariant Spectator Theory of heavy–light and heavy mesons and the predictive power of covariant interaction kernels

    Energy Technology Data Exchange (ETDEWEB)

    Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2017-01-10

    The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.

  1. Contributions to Large Covariance and Inverse Covariance Matrices Estimation

    OpenAIRE

    Kang, Xiaoning

    2016-01-01

    Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...

  2. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
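    The variance-reduction mechanism behind stratified source-sampling can be shown with a generic toy example: forcing a fixed number of samples into each stratum removes the between-stratum component of the sampling variance. This sketch is unrelated to any specific criticality code and uses a simple one-dimensional integrand:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(u):
    # Toy integrand; we estimate E[f(U)] for U ~ Uniform(0,1).
    return np.exp(u)

N, strata = 1000, 10

def plain_estimate(rng):
    # Conventional sampling: all N points drawn from the whole interval.
    return f(rng.uniform(0.0, 1.0, N)).mean()

def stratified_estimate(rng):
    # Equal allocation: N/strata points forced into each sub-interval,
    # guaranteeing every stratum is represented in each replicate.
    m = N // strata
    edges = np.linspace(0.0, 1.0, strata + 1)
    vals = [f(rng.uniform(edges[k], edges[k + 1], m)).mean()
            for k in range(strata)]
    return np.mean(vals)

# Compare sampling variability over repeated experiments.
plain = [plain_estimate(rng) for _ in range(200)]
strat = [stratified_estimate(rng) for _ in range(200)]
print(np.var(strat) < np.var(plain))  # stratification reduces variance
```

In the eigenvalue setting the "strata" are the constituents of the configuration and the samples are fission source sites; the same mechanism prevents a loosely-coupled constituent from being starved of source particles in any given generation.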

  3. Covariance of random stock prices in the Stochastic Dividend Discount Model

    OpenAIRE

    Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico

    2016-01-01

    Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
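    A Monte Carlo sketch of the effect studied here: when two stocks' dividend growth rates are positively correlated, a Gordon-type price formula inherits a positive covariance between the prices. The parameters are illustrative, and the paper derives this covariance in closed form rather than by simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-period Gordon-style price given a random growth rate g:
#   P = D * (1 + g) / (r - g)
# (Hurley-Johnson-type models use discrete random growth states;
# a clipped bivariate normal is used here purely for illustration.)
def price(D, g, r):
    return D * (1.0 + g) / (r - g)

# Two stocks with correlated growth rates; clip g below the discount
# rate r so that prices stay finite.
r = 0.08
mean_g = [0.02, 0.03]
cov_g = [[0.0004, 0.0003],
         [0.0003, 0.0009]]
g = rng.multivariate_normal(mean_g, cov_g, size=200_000)
g = np.clip(g, None, r - 0.01)

P1 = price(2.0, g[:, 0], r)
P2 = price(1.5, g[:, 1], r)

# Positively correlated growth rates induce positive price covariance.
cov_P = np.cov(P1, P2)[0, 1]
print(cov_P > 0.0)  # True
```

The closed-form result in the paper plays the role of the sample covariance computed here, expressed directly in terms of the moments of the growth-rate distribution.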

  4. Meson form factors and covariant three-dimensional formulation of the composite model

    International Nuclear Information System (INIS)

    Skachkov, N.B.; Solovtsov, I.L.

    1979-01-01

    An apparatus is developed which allows within the relativistic quark model, to find explicit expressions for meson form factors in terms of the wave functions of two-quark system that obey the covariant two-particle quasipotential equation. The exact form of wave functions is obtained by passing to the relativistic configurational representation. As an example, the quark Coulomb interaction is considered

  5. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of the normalization weight on the precision of population clearance (CL_pop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CL_pop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CL_pop parameter estimate in the neonatal dataset was lowest with normalization to the median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CL_pop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with the median weight, a weight outside the observed range is used, the RSE of the CL_pop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CL_TV) at a relevant weight to evaluate the precision of CL predictions.
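    The invariance of predictions to the normalization weight is easy to verify directly: rescaling the reference weight changes the CL_pop parameter value but not the clearance predicted for any individual. A sketch with an assumed allometric exponent of 0.75 and illustrative parameter values:

```python
import numpy as np

# Allometric clearance model: CL(WT) = CL_pop * (WT / WT_ref)^0.75.
# The choice of normalization weight WT_ref rescales CL_pop but leaves
# individual CL predictions unchanged.
def cl_pred(cl_pop, wt, wt_ref, exponent=0.75):
    return cl_pop * (wt / wt_ref) ** exponent

# Suppose the clearance at the median neonatal weight of 2.7 kg is
# 0.5 L/h (illustrative). Express the same model under three
# normalizations: median weight, 1 kg, and 70 kg.
wt_med = 2.7
cl_at_med = 0.5
refs = [wt_med, 1.0, 70.0]
cl_pops = [cl_at_med / (wt_med / ref) ** 0.75 for ref in refs]

# The CL_pop parameter value differs greatly across normalizations...
print(cl_pops)

# ...but the prediction for any individual is identical:
wt = 3.5
preds = [cl_pred(cp, wt, ref) for cp, ref in zip(cl_pops, refs)]
print(preds[0])
```

What the article shows is that the *uncertainty* (RSE) attached to CL_pop is not invariant in the same way: extrapolating the typical value to a reference weight far outside the observed range (70 kg in a neonatal study) inflates its RSE even though the fitted curve is unchanged.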

  6. Extended covariance data formats for the ENDF/B-VI differential data evaluation

    International Nuclear Information System (INIS)

    Peelle, R.W.; Muir, D.W.

    1988-01-01

    The ENDF/B-V library included cross section covariance data, but covariances could not be encoded for all the important data types. New ENDF-6 covariance formats are outlined, including those for cross-file (MF) covariances, resonance parameters over the whole range, and secondary energy and angle distributions. One ''late entry'' format encodes covariance data for cross sections that are output from model or fitting codes in terms of the model-parameter covariance matrix and the tabulated derivatives of cross sections with respect to the model parameters. Another new format yields multigroup cross section variances that increase as the group width decreases. When evaluators use the new formats, the files can be processed and used for improved uncertainty propagation and data combination. 22 refs

  7. A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates

    OpenAIRE

    Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne

    2013-01-01

    The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implem...

  8. Covariant Transform

    OpenAIRE

    Kisil, Vladimir V.

    2010-01-01

    The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, covariant functional calculus and many others. Keywords: Wavelets, cohe...

  9. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and prediction grows in cubic order with the size of the dataset, so application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce the computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and an application to an ozone measurement dataset.
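    The full-scale approximation can be sketched in one dimension: a reduced-rank part built from a small set of knots captures the long-range dependence, and a tapered residual restores the short-range structure that the low-rank part misses. This is a simplified spatial-only sketch of the Sang and Huang construction, with assumed range parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D locations and a much smaller set of knots.
s = np.sort(rng.uniform(0, 1, 200))
knots = np.linspace(0, 1, 15)

def exp_cov(a, b, rho=0.3):
    return np.exp(-np.abs(a[:, None] - b[None, :]) / rho)

def spherical_taper(a, b, gamma=0.1):
    # Compactly supported spherical covariance used as a taper.
    d = np.abs(a[:, None] - b[None, :])
    return np.where(d < gamma,
                    1 - 1.5 * d / gamma + 0.5 * (d / gamma) ** 3, 0.0)

C = exp_cov(s, s)
C_sk = exp_cov(s, knots)
C_kk = exp_cov(knots, knots)

# Reduced-rank (predictive-process) part: captures long-range dependence.
C_lr = C_sk @ np.linalg.solve(C_kk, C_sk.T)

# Tapered residual: restores short-range dependence; the taper makes
# this correction sparse, which is the source of the computational gain.
C_fsa = C_lr + spherical_taper(s, s) * (C - C_lr)

err_lr = np.abs(C - C_lr).max()
err_fsa = np.abs(C_fsa - C).max()
print(err_fsa < err_lr)  # full-scale beats pure low-rank
```

The result remains a valid covariance: the residual C − C_lr is a Schur complement (PSD), the taper matrix is PSD, and the Schur product theorem keeps their elementwise product PSD.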

  10. Horizontal stratified flow model for the 1-D module of WCOBRA/TRAC-TF2: modeling and validation

    Energy Technology Data Exchange (ETDEWEB)

    Liao, J.; Frepoli, C.; Ohkawa, K., E-mail: liaoj@westinghouse.com [Westinghouse Electric Company LLC, LOCA Integrated Services I, Cranberry Twp, Pennsylvania (United States)

    2011-07-01

    For a two-phase flow in a horizontal pipe, the individual phases may separate by gravity. This horizontal stratification significantly impacts the interfacial drag, interfacial heat transfer and wall drag of the two phase flow. For a PWR small break LOCA, the horizontal stratification in cold legs is a highly important phenomenon during loop seal clearance, boiloff and recovery periods. The low interfacial drag in the stratified flow directly controls the time period for the loop clearance and the level of residual water in the loop seal. Horizontal stratification in hot legs also impacts the natural circulation stage of a small break LOCA. In addition, the offtake phenomenon and cold leg condensation phenomenon are also affected by the occurrence of horizontal stratification in the cold legs. In the 1-D module of the WCOBRA/TRAC-TF2 computer code, a horizontal stratification criterion was developed by combining the Taitel-Dukler model and the Wallis-Dobson model, which approximates the viscous Kelvin-Helmholtz neutral stability boundary. The objective of this paper is to present the horizontal stratification model implemented in the code and its assessment against relevant data. The adequacy of the horizontal stratification transition criterion is confirmed by examining the code-predicted flow regime in a horizontal pipe with the measured data in the flow regime map. The void fractions (or liquid level) for the horizontal stratified flow in cold leg or hot leg are predicted with a reasonable accuracy. (author)

  11. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    Science.gov (United States)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matérn covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
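    The role of the two covariance matrices can be seen in the standard Bayesian update, where the prior covariance P and the model-data mismatch covariance R jointly determine how much weight the observations receive. A toy pseudo-data sketch (not the NIST system; dimensions and parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inversion: 10 surface-emission pixels, 4 tower observations.
n, m = 10, 4
H = rng.uniform(0, 1, (m, n))           # footprint (sensitivity) matrix

x_true = rng.uniform(1.0, 2.0, n)       # "true" emissions
x_prior = np.full(n, 1.5)               # prior emission estimate

# Prior covariance with spatial correlation (exponential in pixel index)
# and a diagonal model-data mismatch covariance R (assumed values).
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P = 0.25 * np.exp(-d / 3.0)
R = 0.01 * np.eye(m)

y = H @ x_true + rng.normal(0, 0.1, m)  # synthetic observations

# Bayesian update (geostatistical / Kalman inversion form):
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
P_post = P - K @ H @ P

# The posterior fits the data better than the prior...
prior_misfit = np.linalg.norm(y - H @ x_prior)
post_misfit = np.linalg.norm(y - H @ x_post)
print(post_misfit < prior_misfit)

# ...and posterior variances never exceed prior variances.
print(np.all(np.diag(P_post) <= np.diag(P) + 1e-12))
```

The off-diagonal structure of P is what lets a handful of towers constrain many pixels: correlated prior errors spread the information from observed directions of the emission field into unobserved ones, which is exactly why the choice of covariance model matters so much here.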

  12. Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions

    Science.gov (United States)

    Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.

    2011-12-01

    Quantitative understanding of the role of ocean and terrestrial biosphere in the global carbon cycle, their response and feedback to climate change is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emission, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus, information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with focus on China. Based on the Transcom 22 regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon flux over the 39 land and ocean regions is inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values into the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.

  13. New theoretical model for two-phase flow discharged from stratified two-phase region through small break

    International Nuclear Information System (INIS)

    Yonomoto, Taisuke; Tasaka, Kanji

    1988-01-01

    A theoretical and experimental study was conducted to understand two-phase flow discharged from a stratified two-phase region through a small break. This problem is important for the analysis of a small break loss-of-coolant accident (LOCA) in a light water reactor (LWR). The present theoretical results show that the break quality is a function of h/h_b, where h is the elevation difference between the bulk water level in the upstream region and the break, and the subscript b denotes the onset of entrainment. This result is consistent with existing experimental results in the literature. An air-water experiment was also conducted, varying the break orientation as an experimental parameter, to develop and assess the model. Comparisons between the model and the experimental results show that the present model can satisfactorily predict the flow rate and the quality at the break without using any adjusting constant when liquid entrainment occurs in the stratified two-phase region. When gas entrainment occurs, the experimental data are correlated well by using a single empirical constant. (author)

  14. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L₂ penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
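    The modified Cholesky device used here rewrites a covariance matrix as a sequence of regressions of each measurement on its predecessors: a unit lower-triangular T and a diagonal D with T Σ Tᵀ = D. A minimal sketch on an assumed AR(1)-type covariance:

```python
import numpy as np

# A covariance matrix for 5 repeated measurements (AR(1)-like, assumed).
t = np.arange(5)
Sigma = 0.8 ** np.abs(np.subtract.outer(t, t))

# Modified Cholesky decomposition: find unit lower-triangular T and
# diagonal D with T Sigma T' = D. Row j of T holds the (negated)
# coefficients from regressing measurement j on measurements 0..j-1;
# D[j] is the corresponding innovation (prediction-error) variance.
p = Sigma.shape[0]
T = np.eye(p)
D = np.zeros(p)
D[0] = Sigma[0, 0]
for j in range(1, p):
    phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])  # regression coefs
    T[j, :j] = -phi
    D[j] = Sigma[j, j] - Sigma[j, :j] @ phi             # innovation var

# Check: T Sigma T' is diagonal with entries D.
res = T @ Sigma @ T.T
print(np.allclose(res, np.diag(D)))  # True
```

The payoff for the paper's setting is that the unconstrained regression coefficients and log-variances can be penalized (here an L₂ penalty) without ever risking a non-positive-definite covariance estimate.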

  15. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    Science.gov (United States)

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions will result in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.

  16. Semiparametric estimation of covariance matrices for longitudinal data.

    Science.gov (United States)

    Fan, Jianqing; Wu, Yichao

    2008-12-01

    Estimation of longitudinal data covariance structure poses significant challenges because the data are usually collected at irregular time points. A viable semiparametric model for covariance matrices was proposed in Fan, Huang and Li (2007) that allows one to estimate the variance function nonparametrically and to estimate the correlation function parametrically via aggregating information from irregular and sparse data points within each subject. However, the asymptotic properties of their quasi-maximum likelihood estimator (QMLE) of parameters in the covariance model are largely unknown. In the current work, we address this problem in the context of more general models for the conditional mean function, including parametric, nonparametric, and semiparametric specifications. We also consider the possibility of a rough mean regression function and introduce the difference-based method to reduce biases in the context of varying-coefficient partially linear mean regression models. This provides a more robust estimator of the covariance function under a wider range of situations. Under some technical conditions, consistency and asymptotic normality are obtained for the QMLE of the parameters in the correlation function. Simulation studies and a real data example are used to illustrate the proposed approach.

  17. Nonparametric Bayesian models for a spatial covariance.

    Science.gov (United States)

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.

  18. Remarks on Bousso's covariant entropy bound

    CERN Document Server

    Mayo, A E

    2002-01-01

    Bousso's covariant entropy bound is put to the test in the context of a non-singular cosmological solution of general relativity found by Bekenstein. Although the model complies with every assumption made in Bousso's original conjecture, the entropy bound is violated due to the occurrence of negative energy density associated with the interaction of some of the matter components in the model. We demonstrate how this property allows the test model to 'elude' a proof of Bousso's conjecture which was given recently by Flanagan, Marolf and Wald. This corroborates the view that the covariant entropy bound should be applied only to stable systems for which every matter component carries positive energy density.

  19. Stratifying Parkinson's Patients With STN-DBS Into High-Frequency or 60 Hz-Frequency Modulation Using a Computational Model.

    Science.gov (United States)

    Khojandi, Anahita; Shylo, Oleg; Mannini, Lucia; Kopell, Brian H; Ramdhani, Ritesh A

    2017-07-01

    High frequency stimulation (HFS) of the subthalamic nucleus (STN) is a well-established therapy for Parkinson's disease (PD), particularly the cardinal motor symptoms and levodopa induced motor complications. Recent studies have suggested the possible role of 60 Hz stimulation in STN-deep brain stimulation (DBS) for patients with gait disorder. The objective of this study was to develop a computational model, which stratifies patients a priori based on symptomatology into different frequency settings (i.e., high frequency or 60 Hz). We retrospectively analyzed preoperative MDS-Unified Parkinson's Disease Rating Scale III scores (32 indicators) collected from 20 PD patients implanted with STN-DBS at Mount Sinai Medical Center on either 60 Hz stimulation (ten patients) or HFS (130-185 Hz) (ten patients) for an average of 12 months. Predictive models using the Random Forest classification algorithm were built to associate patient/disease characteristics at surgery with the stimulation frequency. These models were evaluated objectively using a leave-one-out cross-validation approach. The computational models produced stratified patients into 60 Hz or HFS (130-185 Hz) groups with 95% accuracy. The best models relied on two or three predictors out of the 32 analyzed for classification. Across all predictors, gait and rest tremor of the right hand were consistently the most important. Computational models were developed using preoperative clinical indicators in PD patients treated with STN-DBS. These models were able to accurately stratify PD patients into 60 Hz stimulation or HFS (130-185 Hz) groups a priori, offering a unique potential to enhance the utilization of this therapy based on clinical subtypes. © 2017 International Neuromodulation Society.
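
    The classification workflow described (Random Forest evaluated by leave-one-out cross-validation) can be sketched with scikit-learn. The 20 x 32 shape mirrors the study (20 patients, 32 MDS-UPDRS III indicators), but the feature values and group labels below are simulated stand-ins, not patient data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the study data: 20 patients x 32 indicators.
X = rng.normal(size=(20, 32))
y = np.repeat([0, 1], 10)     # hypothetical labels: 0 = 60 Hz, 1 = HFS
X[y == 1, :2] += 1.5          # make two indicators informative

# Leave-one-out CV: each patient is held out once, giving 20 scores.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
accuracy = scores.mean()
```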

  20. Maintainability analysis considering time-dependent and time-independent covariates

    International Nuclear Information System (INIS)

    Barabadi, Abbas; Barabady, Javad; Markeset, Tore

    2011-01-01

    Traditional parametric methods for assessing maintainability most often only consider time to repair (TTR) as a single explanatory variable. However, to predict availability more precisely for high availability systems, a better model is needed to quantify the effect of the operational environment on maintainability. The proportional repair model (PRM), which is developed based on the proportional hazard model (PHM), may be used to analyze maintainability in the presence of covariates. In the PRM, the effect of covariates is considered to be time independent. However, this assumption may not be valid for some situations. The aim of this paper is to develop the Cox regression model and its extension in the presence of time-dependent covariates for determining maintainability. A simple case study is used to demonstrate how the model can be applied in a real case.
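
    The proportional-rate idea can be sketched as follows: the repair rate is a baseline multiplied by exp(linear predictor), and a time-dependent covariate z(t) makes that multiplier vary over time. Every parameter name and value below is hypothetical, chosen only to illustrate the shape of the model:

```python
import numpy as np

# Hypothetical baseline repair rate and covariate effects (illustrative).
lam0 = 0.5          # baseline repair rate (repairs per hour)
beta_temp = 0.3     # effect of ambient temperature (time-independent)
beta_wear = 0.05    # effect of cumulative wear (time-dependent)

def repair_rate(t, temp, wear_rate):
    """PRM-style rate: baseline x exp(linear predictor).

    With a time-dependent covariate z(t) = wear_rate * t, the covariate
    effect is no longer a constant multiplier of the baseline rate."""
    z_t = wear_rate * t
    return lam0 * np.exp(beta_temp * temp + beta_wear * z_t)

t = np.linspace(0.0, 10.0, 5)
rates = repair_rate(t, temp=1.0, wear_rate=2.0)   # rate grows with wear
```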

  1. Precomputing Process Noise Covariance for Onboard Sequential Filters

    Science.gov (United States)

    Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell

    2017-01-01

    Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
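
    The inflation step described above is the standard covariance time update of a linear filter, P_next = F P F' + Q_k; a precomputed profile simply replaces a single fixed Q with a time-indexed sequence along the reference trajectory. A toy constant-velocity sketch (the scale schedule is invented):

```python
import numpy as np

# One covariance propagation step of a linear Kalman filter.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # constant-velocity state transition
P = np.diag([1.0, 0.1])             # state covariance before propagation

# Standard fixed process noise for white acceleration noise of PSD q.
q = 0.2
Q_fixed = q * np.array([[dt**3 / 3, dt**2 / 2],
                        [dt**2 / 2, dt      ]])

# A precomputed profile would supply a time-indexed Q_k along the
# reference trajectory; here a lookup list of scaled matrices mimics it.
Q_profile = [Q_fixed * s for s in (0.5, 1.0, 2.0)]   # hypothetical schedule

k = 2
P_next = F @ P @ F.T + Q_profile[k]  # covariance inflated over the step
```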

  2. Lagged PM2.5 effects in mortality time series: Critical impact of covariate model

    Science.gov (United States)

    The two most common approaches to modeling the effects of air pollution on mortality are the Harvard and the Johns Hopkins (NMMAPS) approaches. These two approaches, which use different sets of covariates, result in dissimilar estimates of the effect of lagged fine particulate ma...

  3. Prediction of stably stratified homogeneous shear flows with second-order turbulence models

    International Nuclear Information System (INIS)

    Pereira, J C F; Rocha, J M P

    2010-01-01

    The present study investigated the role of pressure-correlation second-order turbulence modelling schemes on the predicted behaviour of stably stratified homogeneous vertical-sheared turbulence. The pressure-correlation terms were modelled with a nonlinear formulation (Craft 1991), which was compared with a linear pressure-strain model and the 'isotropization of production' model for the pressure-scalar correlation. Two additional modelling issues were investigated: the influence of the buoyancy term in the kinetic energy dissipation rate equation and the time scale in the thermal production term in the scalar variance dissipation equation. The predicted effects of increasing the Richardson number on turbulence characteristics were compared against a comprehensive set of direct numerical simulation databases. The linear models provide a broadly satisfactory description of the major effects of the Richardson number on stratified shear flow. The buoyancy term in the dissipation equation of the turbulent kinetic energy generates excessively low levels of dissipation. For moderate and large Richardson numbers, the term yields unrealistic linear oscillations in the shear and buoyancy production terms, and therefore should be dropped in this flow (or at least its coefficient c_ε3 should be substantially reduced from its standard value). The mechanical dissipation time scale provides marginal improvements in comparison to the scalar time scale in the production. The observed inaccuracy of the linear model in predicting the magnitude of the effects on the velocity anisotropy was demonstrated to be attributed mainly to the defective behaviour of the pressure-correlation model, especially for stronger stratification. The turbulence closure embodying a nonlinear formulation for the pressure-correlations and specific versions of the dissipation equations failed to predict the tendency of the flow to anisotropy with increasing stratification.
By isolating the effects of the

  4. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not it is anticipated.
It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
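
    The contrast between the theoretical and empirical covariance can be illustrated in a small weighted-least-squares fit: the theoretical covariance is (H'WH)^-1, while an empirical variant, in the spirit of the abstract, rescales it by the average weighted residual variance so that the actual residuals enter the result. This is a simplified stand-in for the author's formulation, with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Weighted least squares with a simple linear observation model.
n, p = 50, 2
H = np.column_stack([np.ones(n), np.linspace(0, 1, n)])  # design matrix
x_true = np.array([1.0, -2.0])
sigma = 0.3
y = H @ x_true + rng.normal(scale=sigma, size=n)

W = np.eye(n) / sigma**2                 # weights = inverse noise covariance
N = H.T @ W @ H                          # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)

# Traditional state error covariance: maps assumed measurement noise only.
P_theoretical = np.linalg.inv(N)

# Empirical variant: scale by the average weighted residual variance, so
# the actual residuals (all error sources) enter the covariance.
r = y - H @ x_hat
s2 = (r @ W @ r) / (n - p)               # average weighted residual variance
P_empirical = s2 * P_theoretical
```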

  5. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    Science.gov (United States)

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

  6. Extensive set of low-fidelity cross sections covariances in fast neutron region

    International Nuclear Information System (INIS)

    Pigni, M.T.; Herman, M.; Oblozinsky, P.

    2008-01-01

    We produced a large set of neutron cross section covariances in the energy range of 5 keV - 20 MeV. The covariance matrices were calculated for 307 isotopes divided into three major regions: structural materials, fission products, and heavy nuclei. These results have been developed to provide initial, but consistent estimates of covariance data for nuclear criticality safety applications. The methodology for the determination of such covariance matrices is presented. It combines the nuclear reaction model code EMPIRE which calculates sensitivity of cross sections to nuclear reaction model parameters, and the Bayesian code KALMAN that propagates uncertainties of the model parameters to cross sections. Given the large number of materials, only marginal reference to experimental data was made. The covariances were derived from the perturbation of several key model parameters selected by the sensitivity analysis. These parameters refer to the optical model potential, the level densities and the strength of the pre-equilibrium emission. This work represents the first attempt to generate nuclear data covariances on such a large scale. (authors)
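
    The propagation of model parameter uncertainties to cross-section covariances follows the standard sandwich rule C = S P S', with S the matrix of cross-section sensitivities to the parameters. The numbers below are invented for illustration and are not EMPIRE or KALMAN output:

```python
import numpy as np

# Sandwich-rule propagation of model parameter uncertainties to cross
# sections: C = S P S'. Values are purely illustrative.
S = np.array([[0.8, 0.1],        # d(sigma_i)/d(p_j) at 3 energies for
              [0.5, 0.4],        # 2 model parameters (e.g. optical
              [0.2, 0.7]])       # potential depth, level density a)
P = np.diag([0.05**2, 0.10**2])  # parameter covariance (uncorrelated here)

C = S @ P @ S.T                  # cross-section covariance matrix
# Correlation matrix derived from the covariance.
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
```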

  7. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predictive performance of multiple-response regression, compared with separate linear regressions, is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
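
    A useful baseline for this method is plain, unpenalized reduced-rank regression, obtained by truncating the SVD of the OLS fitted values; the paper's estimator additionally imposes sparsity penalties and estimates the error covariance jointly, which requires an iterative algorithm not shown here. A numpy sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated multiple-response regression with a rank-2 coefficient matrix.
n, p, q, r = 100, 6, 4, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]      # full-rank OLS fit
# Reduce rank by truncating the SVD of the fitted values.
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
Y_rr = (U[:, :r] * s[:r]) @ Vt[:r]                # rank-r fitted values
B_rr = np.linalg.lstsq(X, Y_rr, rcond=None)[0]    # rank-r coefficients
```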

  9. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches by analyzing results for the major reaction channels on 89Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  10. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; High dimension; Hierarchical maximum likelihood; Nested parameter spaces; Spectral diagonal covariance model; Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016

  11. Model test on partial expansion in stratified subsidence during foundation pit dewatering

    Science.gov (United States)

    Wang, Jianxiu; Deng, Yansheng; Ma, Ruiqiang; Liu, Xiaotian; Guo, Qingfeng; Liu, Shaoli; Shao, Yule; Wu, Linbo; Zhou, Jie; Yang, Tianliang; Wang, Hanmei; Huang, Xinlei

    2018-02-01

    Partial expansion was observed in stratified subsidence during foundation pit dewatering. However, the phenomenon was suspected to be an error because the compression of layers is known to occur when subsidence occurs. A slice of the subsidence cone induced by drawdown was selected as the prototype. Model tests were performed to investigate the phenomenon. The underlying confined aquifer was generated as a movable rigid plate with a hinge at one end. The overlying layers were simulated with remolded materials collected from a construction site. Model tests performed under the conceptual model indicated that partial expansion occurred in stratified settlements under coordination deformation and consolidation conditions. During foundation pit dewatering, rapid drawdown resulted in rapid subsidence in the dewatered confined aquifer. The rapidly subsiding confined aquifer top was the bottom deformation boundary of the overlying layers. Non-coordination deformation was observed at the top and bottom of the subsiding overlying layers. The subsidence of overlying layers was larger at the bottom than at the top. The layers expanded and became thicker. The phenomenon was verified using a numerical simulation based on the finite difference method. Compared with the numerical simulation results, the boundary effect of the physical tests was obvious at the observation point close to the movable endpoint. The tensile stress of the overlying soil layers induced by the underlying settlement of the dewatered confined aquifer contributed to the expansion phenomenon. The partial expansion of overlying soil layers was defined as inversed rebound. The inversed rebound was induced by inversed coordination deformation. Compression was induced by the consolidation in the overlying soil layers because of drainage. Partial expansion occurred when the expansion exceeded the compression.
Considering the inversed rebound, traditional layer-wise summation method for calculating subsidence should be

  12. A three domain covariance framework for EEG/MEG data.

    Science.gov (United States)

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
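
    The three-domain structure can be written down directly: with component matrices for space, time, and trials, the full covariance is their Kronecker product. A toy numpy construction (component sizes and values are invented):

```python
import numpy as np

def ar1(n, rho):
    """AR(1)-type correlation matrix, a common choice for the time factor."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Invented component covariances for the three domains.
C_space = np.array([[1.0, 0.3],
                    [0.3, 1.0]])                    # 2 sensors
C_time = ar1(3, 0.5)                                 # 3 time samples
C_trial = 0.1 * np.ones((4, 4)) + 0.9 * np.eye(4)    # 4 trials, equicorrelated

# Full covariance over all 2 * 3 * 4 = 24 measurements.
C = np.kron(C_trial, np.kron(C_time, C_space))
```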

  13. A scale invariant covariance structure on jet space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2005-01-01

    This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...

  14. On the algebraic structure of covariant anomalies and covariant Schwinger terms

    International Nuclear Information System (INIS)

    Kelnhofer, G.

    1992-01-01

    A cohomological characterization of covariant anomalies and covariant Schwinger terms in an anomalous Yang-Mills theory is formulated and will be geometrically interpreted. The BRS and anti-BRS transformations are defined as purely differential geometric objects. Finally the covariant descent equations are formulated within this context. (author)

  15. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
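
    Two of the candidate structures can be compared through the Gaussian log-likelihood: with data simulated from an AR(1) covariance, the AR(1) structure should fit better than compound symmetry. A self-contained numpy sketch, with parameters fixed at their true values rather than estimated, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def compound_symmetry(n, var, rho):
    return var * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

def ar1(n, var, rho):
    idx = np.arange(n)
    return var * rho ** np.abs(idx[:, None] - idx[None, :])

def gauss_loglik(data, Sigma):
    """Log-likelihood of i.i.d. mean-zero Gaussian rows under Sigma."""
    n = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ij,jk,ik->', data, np.linalg.inv(Sigma), data)
    m = data.shape[0]
    return -0.5 * (m * (n * np.log(2 * np.pi) + logdet) + quad)

# Simulate 200 subjects measured at 5 time points from an AR(1) structure.
n_time = 5
Sigma_true = ar1(n_time, var=1.0, rho=0.7)
L = np.linalg.cholesky(Sigma_true)
data = rng.normal(size=(200, n_time)) @ L.T

ll_ar1 = gauss_loglik(data, Sigma_true)
ll_cs = gauss_loglik(data, compound_symmetry(n_time, 1.0, 0.7))
# The correctly specified AR(1) structure attains the higher likelihood.
```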

  16. Modelling carbon fluxes of forest and grassland ecosystems in Western Europe using the CARAIB dynamic vegetation model: evaluation against eddy covariance data.

    Science.gov (United States)

    Henrot, Alexandra-Jane; François, Louis; Dury, Marie; Hambuckers, Alain; Jacquemin, Ingrid; Minet, Julien; Tychon, Bernard; Heinesch, Bernard; Horemans, Joanna; Deckmyn, Gaby

    2015-04-01

    Eddy covariance measurements are an essential resource to understand how ecosystem carbon fluxes react in response to climate change, and to help to evaluate and validate the performance of land surface and vegetation models at regional and global scale. In the framework of the MASC project (« Modelling and Assessing Surface Change impacts on Belgian and Western European climate »), vegetation dynamics and carbon fluxes of forest and grassland ecosystems simulated by the CARAIB dynamic vegetation model (Dury et al., iForest - Biogeosciences and Forestry, 4:82-99, 2011) are evaluated and validated by comparison of the model predictions with eddy covariance data. Here carbon fluxes (e.g. net ecosystem exchange (NEE), gross primary productivity (GPP), and ecosystem respiration (RECO)) and evapotranspiration (ET) simulated with the CARAIB model are compared with the fluxes measured at several eddy covariance flux tower sites in Belgium and Western Europe, chosen from the FLUXNET global network (http://fluxnet.ornl.gov/). CARAIB is forced either with surface atmospheric variables derived from the global CRU climatology, or with in situ meteorological data. Several tree (e.g. Pinus sylvestris, Fagus sylvatica, Picea abies) and grass species (e.g. Poaceae, Asteraceae) are simulated, depending on the species encountered on the studied sites. The aim of our work is to assess the model ability to reproduce the daily, seasonal and interannual variability of carbon fluxes and the carbon dynamics of forest and grassland ecosystems in Belgium and Western Europe.

  17. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Then, for a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author)
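
    Maximum likelihood estimation of a covariance parameter can be sketched in one dimension: simulate from an exponential covariance k(h) = exp(-|h|/theta) and minimize the Gaussian negative log-likelihood over a grid of candidate theta values. All sizes and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D Kriging toy: 60 scattered sites, exponential covariance.
x = np.sort(rng.uniform(0, 10, size=60))
theta_true = 1.5
K_true = np.exp(-np.abs(x[:, None] - x[None, :]) / theta_true)
L = np.linalg.cholesky(K_true + 1e-8 * np.eye(60))   # jitter for stability
y = L @ rng.normal(size=60)                          # one GP realization

def neg_loglik(theta):
    """Gaussian negative log-likelihood (up to a constant) at range theta."""
    K = np.exp(-np.abs(x[:, None] - x[None, :]) / theta) + 1e-8 * np.eye(60)
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return 0.5 * (logdet + y @ alpha)

# Grid search stands in for a proper optimizer.
grid = np.linspace(0.2, 5.0, 25)
theta_hat = grid[np.argmin([neg_loglik(t) for t in grid])]
```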

  18. Adaptive Non-Interventional Heuristics for Covariation Detection in Causal Induction: Model Comparison and Rational Analysis

    Science.gov (United States)

    Hattori, Masasi; Oaksford, Mike

    2007-01-01

    In this article, 41 models of covariation detection from 2 x 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in…
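
    The phi coefficient underlying the new model is computed directly from a 2 x 2 table [[a, b], [c, d]] as phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)); the counts below are invented for illustration:

```python
import numpy as np

def phi(table):
    """Phi coefficient of a 2x2 contingency table [[a, b], [c, d]],
    with rows = cause present/absent, columns = effect present/absent."""
    a, b = table[0]
    c, d = table[1]
    num = a * d - b * c
    den = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

table = np.array([[30, 10],
                  [10, 50]])   # illustrative counts
value = phi(table)             # (30*50 - 10*10) / 2400 = 0.5833...
```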

  19. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. When a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches in the literature can be restrictive. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewed response distributions. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    Science.gov (United States)

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping (determined by EC) for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
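
The input-output mapping described above can be sketched for a linear noise-diffusion (multivariate Ornstein-Uhlenbeck) model: the stationary covariance Q of dx = Jx dt + dW solves the Lyapunov equation J Q + Q J' + D = 0, where J plays the role of the effective connectivity and D holds the local input variances. The connectivity and variance values below are invented for illustration, not estimates from fMRI data.

```python
import numpy as np

# Hypothetical 3-region effective connectivity J (off-diagonal couplings)
# with a stable leak term on the diagonal, and local input variances D.
J = np.array([[-1.0, 0.3, 0.0],
              [0.5, -1.0, 0.2],
              [0.0, 0.4, -1.0]])
D = np.diag([0.5, 1.0, 0.8])

# Stationary covariance of dx = J x dt + dW solves J Q + Q J' + D = 0.
# Vectorize column-major and solve the resulting linear system directly.
n = J.shape[0]
A = np.kron(np.eye(n), J) + np.kron(J, np.eye(n))
Q = np.linalg.solve(A, -D.flatten(order="F")).reshape(n, n, order="F")

# Model functional connectivity: the correlation matrix implied by Q.
FC = Q / np.sqrt(np.outer(np.diag(Q), np.diag(Q)))
```

Changing D (local variability) or J (coordination) changes the output covariance Q, which is the discrimination question the paper poses in the opposite direction.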

  1. Properties of Endogenous Post-Stratified Estimation using remote sensing data

    Science.gov (United States)

    John Tipton; Jean Opsomer; Gretchen Moisen

    2013-01-01

    Post-stratification is commonly used to improve the precision of survey estimates. In traditional post-stratification methods, the stratification variable must be known at the population level. When suitable covariates are available at the population level, an alternative approach consists of fitting a model on the covariates, making predictions for the population and...

  2. Covariance and correlation estimation in electron-density maps.

    Science.gov (United States)

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance at any point of an electron-density map at any stage of the phasing process. The main aim of those papers was to associate a standard deviation with each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, regardless of the correlation between the model and target structures. The aim is to verify whether the electron density at one point of the map is amplified or depressed as an effect of the electron density at one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty in the measurements may influence the covariance, particularly in the final stages of structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  3. Super-Poincare covariant canonical formulation of superparticles and Green-Schwarz superstrings

    International Nuclear Information System (INIS)

    Nissimov, E.R.; Pacheva, S.J.

    1987-11-01

    First, a new unified covariant formulation simultaneously describing both superparticles and spinning particles is proposed. In this formulation both models emerge as different gauge fixings from a more general point-particle model with a larger gauge invariance. The general model possesses covariant and functionally independent first-class constraints only. Next, the above construction is generalized to the case of Green-Schwarz (GS) superstrings. This allows straightforward application of the Batalin-Fradkin-Vilkovisky (BFV) Becchi-Rouet-Stora-Tyutin (BRST) formalism for a manifestly super-Poincare covariant canonical quantization. The corresponding BRST charge turns out to be remarkably simple and is of rank one. It is used to construct a covariant BFV Hamiltonian for the GS superstring exhibiting explicit Parisi-Sourlas OSp(1,1/2) symmetry. (author). 21 refs

  4. Convex Banding of the Covariance Matrix.

    Science.gov (United States)

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
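
The paper's convex banding estimator is the solution of an optimization problem; as a simpler point of comparison, the classical hard-banding estimator it generalizes simply zeroes every entry of the sample covariance more than k positions off the diagonal. A sketch of that baseline (the tridiagonal population covariance is a made-up example, not data from the paper):

```python
import numpy as np

def band_covariance(S, k):
    """Hard-band a sample covariance: zero all entries more than k off the diagonal."""
    p = S.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= k
    return np.where(mask, S, 0.0)

rng = np.random.default_rng(1)
p, n = 8, 200
# Exactly banded (tridiagonal) population covariance for the simulation.
A = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(A).T
S = np.cov(X, rowvar=False)
S_band = band_covariance(S, k=1)
```

Unlike this hard cut-off, the convex banding estimator tapers the sample covariance adaptively and guarantees the theoretical optimality properties described above.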

  5. Visualization and assessment of spatio-temporal covariance properties

    KAUST Repository

    Huang, Huang

    2017-11-23

    Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.

  6. Brownian distance covariance

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2010-01-01

    Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
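
The sample distance correlation can be sketched with the standard double-centering construction, here restricted to one-dimensional samples for brevity (a general implementation would accept vectors of arbitrary dimension). It equals 1 for exact affine relationships, which provides a convenient sanity check.

```python
import numpy as np

def _centered_dist(x):
    """Double-centered Euclidean distance matrix of a 1-D sample."""
    a = np.abs(x[:, None] - x[None, :])
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples of equal length."""
    A, B = _centered_dist(x), _centered_dist(y)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

x = np.array([0.0, 1.0, 2.5, 4.0, 7.0])
print(distance_correlation(x, 2 * x + 1))        # ≈ 1.0 for any affine map
```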

  7. Comparison of exact, efron and breslow parameter approach method on hazard ratio and stratified cox regression model

    Science.gov (United States)

    Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian

    2018-04-01

    The lungs are among the most important organs of the respiratory system. Lung disorders are various, e.g. pneumonia, emphysema, tuberculosis and lung cancer; of these, lung cancer is the most harmful. With this in mind, this research applies survival analysis to the factors affecting the survival of lung cancer patients, comparing the exact, Efron and Breslow parameter approximation methods for the hazard ratio in a stratified Cox regression model. The data are the medical records of lung cancer patients at the Jember Paru-paru (lung) hospital, East Java, Indonesia, in 2016. The factors potentially affecting the survival of the lung cancer patients comprise sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The results show that the exact method in the stratified Cox regression model performs better than the others, and that patient survival is affected by age and general condition.
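
The Efron and Breslow approaches named above differ only in how tied event times enter the Cox partial likelihood. A minimal sketch of the two contributions at a single event time with d tied events (the linear-predictor values are invented, not the hospital data):

```python
import numpy as np

def tied_event_loglik(eta_risk, eta_dead, method="efron"):
    """Log partial-likelihood contribution of one event time with d tied events.
    eta_risk: linear predictors x'beta over the risk set (includes the events);
    eta_dead: linear predictors of the d tied events themselves."""
    d = len(eta_dead)
    s_risk = np.exp(eta_risk).sum()
    s_dead = np.exp(eta_dead).sum()
    if method == "breslow":
        # Breslow: use the full risk-set total in all d factors.
        return eta_dead.sum() - d * np.log(s_risk)
    # Efron: deplete the risk-set total by the average mass of the tied
    # events at each of the d steps.
    steps = s_risk - (np.arange(d) / d) * s_dead
    return eta_dead.sum() - np.log(steps).sum()

eta_risk = np.array([0.2, -0.1, 0.5, 0.0, 0.3])
eta_dead = np.array([0.5, 0.3])          # two tied events within the risk set
ll_efron = tied_event_loglik(eta_risk, eta_dead, "efron")
ll_breslow = tied_event_loglik(eta_risk, eta_dead, "breslow")
```

With no ties (d = 1) the two contributions coincide; with ties, Efron's depleted denominators always give the larger (less biased) contribution, while the exact method enumerates all orderings of the tied events.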

  8. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
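
The covariance function implied by such a random regression model is G(a1, a2) = phi(a1)' K phi(a2), where phi stacks the Legendre polynomials of standardized age and K is the covariance matrix of the random regression coefficients. A sketch with a hypothetical K (not the estimates from the Duroc data):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def covariance_function(ages, K, a_min, a_max, order=2):
    """Covariance G(a1, a2) = phi(a1)' K phi(a2) implied by a random
    regression on Legendre polynomials of standardized age."""
    t = 2 * (np.asarray(ages, float) - a_min) / (a_max - a_min) - 1  # map to [-1, 1]
    Phi = legvander(t, order)            # n x (order + 1) basis matrix
    return Phi @ K @ Phi.T

# Hypothetical 3x3 covariance of the random regression coefficients.
K = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.0],
              [0.1, 0.0, 0.5]])
G = covariance_function(np.linspace(60, 180, 5), K, a_min=60, a_max=180)
```

Evaluating G on a grid of ages gives the full (co)variance surface of the trait trajectory, from which breeding values for derived traits such as residual feed intake can be computed.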

  9. Covariant second-order perturbations in generalized two-field inflation

    International Nuclear Information System (INIS)

    Tzavara, Eleftheria; Tent, Bartjan van; Mizuno, Shuntaro

    2014-01-01

    We examine the covariant properties of generalized models of two-field inflation, with non-canonical kinetic terms and a possibly non-trivial field metric. We demonstrate that kinetic-term derivatives and covariant field derivatives do commute in a proper covariant framework, which was not realized before in the literature. We also define a set of generalized slow-roll parameters, using a unified notation. Within this framework, we study the most general class of models that allows for well-defined adiabatic and entropic sound speeds, which we identify as the models with parallel momentum and field velocity vectors. For these models we write the exact cubic action in terms of the adiabatic and isocurvature perturbations. We thus provide the tool to calculate the exact non-Gaussianity beyond slow-roll and at any scale for these generalized models. We illustrate our general results by considering their long-wavelength limit, as well as with the example of two-field DBI inflation

  10. Quality analysis applied on eddy covariance measurements at complex forest sites using footprint modelling

    Czech Academy of Sciences Publication Activity Database

    Rebmann, C.; Göckede, M.; Foken, T.; Aubinet, M.; Aurela, M.; Berbigier, P.; Bernhofer, C.; Buchmann, N.; Carrara, A.; Cescatti, A.; Ceulemans, R.; Clement, R.; Elbers, J. A.; Granier, A.; Grünwald, T.; Guyon, D.; Havránková, Kateřina; Heinesch, B.; Knohl, A.; Laurila, T.; Longdoz, B.; Marcolla, B.; Markkanen, T.; Miglietta, F.; Moncrieff, J.; Montagnani, L.; Moors, E.; Nardino, M.; Ourcival, J.-M.; Rambal, S.; Rannik, Ü.; Rotenberg, E.; Sedlák, Pavel; Unterhuber, G.; Vesala, T.; Yakir, D.

    2005-01-01

    Roč. 80, - (2005), s. 121-141 ISSN 0177-798X Grant - others:Carboeuroflux(XE) EVK-2-CT-1999-00032 Institutional research plan: CEZ:AV0Z30420517; CEZ:AV0Z6087904 Keywords : Eddy covariance * Quality assurance * Quality control * Footprint modelling * Heterogeneity Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.295, year: 2005

  11. Soil mixing of stratified contaminated sands.

    Science.gov (United States)

    Al-Tabba, A; Ayotamuno, M J; Martin, R J

    2000-02-01

    Validation of soil mixing for the treatment of contaminated ground is needed in a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low cost investigation of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that the auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.

  12. Covariant representations of nuclear *-algebras

    International Nuclear Information System (INIS)

    Moore, S.M.

    1978-01-01

    Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations

  13. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
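
After the censored covariate has been multiply imputed, the completed-data fits are combined with Rubin's rules. A minimal sketch of that pooling step (the estimates and within-imputation variances below are invented, not results from the Alzheimer's study):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool M completed-data estimates with Rubin's rules.
    Returns the pooled estimate and its total variance W + (1 + 1/M) B."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    M = len(est)
    qbar = est.mean()            # pooled point estimate
    W = var.mean()               # within-imputation variance
    B = est.var(ddof=1)          # between-imputation variance
    return qbar, W + (1 + 1 / M) * B

qbar, total_var = rubin_pool([0.42, 0.45, 0.40, 0.44, 0.43],
                             [0.010, 0.011, 0.009, 0.010, 0.012])
```

The between-imputation term B is what restores the uncertainty that single imputation or substitution methods understate.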

  14. Covariant effective action for loop quantum cosmology from order reduction

    International Nuclear Information System (INIS)

    Sotiriou, Thomas P.

    2009-01-01

    Loop quantum cosmology (LQC) seems to be predicting modified effective Friedmann equations without extra degrees of freedom. A puzzle arises if one decides to seek a covariant effective action which would lead to the given Friedmann equation: The Einstein-Hilbert action is the only action that leads to second order field equations and, hence, there exists no covariant action which, under metric variation, leads to a modified Friedmann equation without extra degrees of freedom. It is shown that, at least for isotropic models in LQC, this issue is naturally resolved and a covariant effective action can be found if one considers higher order theories of gravity but faithfully follows effective field theory techniques. However, our analysis also raises doubts on whether a covariant description without background structures can be found for anisotropic models.

  15. Numerical modelling of disintegration of basin-scale internal waves in a tank filled with stratified water

    Directory of Open Access Journals (Sweden)

    N. Stashchuk

    2005-01-01

    We present the results of numerical experiments performed with a fully non-linear, non-hydrostatic numerical model to study the baroclinic response of a long narrow tank filled with stratified water to an initially tilted interface. Upon release, the system starts to oscillate with an eigenfrequency corresponding to basin-scale baroclinic gravitational seiches. Field observations suggest that the disintegration of basin-scale internal waves into packets of solitary waves, shear instabilities, billows and spots of mixed water is an important mechanism for the transfer of energy within stratified lakes. Laboratory experiments performed by D. A. Horn, J. Imberger and G. N. Ivey (JFM, 2001) reproduced several regimes, which include damped linear waves and solitary waves. The generation of billows and shear instabilities induced by the basin-scale wave was, however, not sufficiently studied. The developed numerical model computes a variety of flows which were not observed with the experimental set-up. In particular, the model results showed that under conditions of low dissipation, the regimes of billows and supercritical flows may transform into a solitary wave regime. The obtained results can help in the interpretation of numerous observations of mixing processes in real lakes.

  16. Rotational covariance and light-front current matrix elements

    International Nuclear Information System (INIS)

    Keister, B.D.

    1994-01-01

    Light-front current matrix elements for elastic scattering from hadrons with spin 1 or greater must satisfy a nontrivial constraint associated with the requirement of rotational covariance for the current operator. Using a model ρ meson as a prototype for hadronic quark models, this constraint and its implications are studied at both low and high momentum transfers. In the kinematic region appropriate for asymptotic QCD, helicity rules, together with the rotational covariance condition, yield an additional relation between the light-front current matrix elements

  17. Implications of the modelling of stratified hot water storage tanks in the simulation of CHP plants

    Energy Technology Data Exchange (ETDEWEB)

    Campos Celador, A., E-mail: alvaro.campos@ehu.es [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain); Odriozola, M.; Sala, J.M. [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain)

    2011-08-15

    Highlights: • Three different modelling approaches for the simulation of hot water tanks are presented. • The three models are simulated within a residential cogeneration plant. • Small differences in the results are found by an energy and exergy analysis. • Big differences between the results are found by an advanced exergy analysis. • Results of the feasibility study are explained by the advanced exergy analysis. - Abstract: This paper considers the effect that different hot water storage tank modelling approaches have on the global simulation of residential CHP plants, as well as their impact on their economic feasibility. While a simplified assessment of the heat storage is usually considered in feasibility studies of CHP plants in buildings, this paper deals with three different levels of modelling of the hot water tank: an actual stratified model, an ideal stratified model and a fully mixed model. These three approaches are presented and comparatively evaluated on the same case study, a cogeneration plant with thermal storage meeting the loads of a housing development located in the Bilbao metropolitan area (Spain). The case study is simulated in TRNSYS for each of the three modelling cases, and the annual results thus obtained are analysed from both a First- and Second-Law-based viewpoint. While the global energy and exergy efficiencies of the plant for the three modelling cases agree quite well, important differences are found between the economic results of the feasibility study. These results can be predicted by means of an advanced exergy analysis of the storage tank, considering the endogenous and exogenous exergy destruction terms caused by the hot water storage tank.

  18. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)

    2010-08-15

    The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies' performances.
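
The setting described above can be given a minimal simulation sketch: a two-state environment evolving as a Markov chain, with Gaussian (hence non-monotone) deterioration increments whose drift is scaled by a proportional-hazards-style factor exp(gamma[z]). All numerical values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],      # transition matrix of the environment chain
              [0.2, 0.8]])
gamma = np.array([0.0, 0.7])   # log drift multiplier per environment state

# Non-monotone deterioration: Gaussian increments with a positive drift
# amplified by exp(gamma[z]) in the harsher environment state.
z, x, path = 0, 0.0, []
for _ in range(200):
    z = rng.choice(2, p=P[z])                           # environment step
    x += rng.normal(loc=0.05 * np.exp(gamma[z]), scale=0.1)
    path.append(x)
```

A maintenance policy would then be layered on top: inspect the path at chosen epochs and replace when the deterioration level (given the current covariate state) exceeds a threshold, searching thresholds and inspection intervals for the minimum long-run average cost.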

  19. Condition-based inspection/replacement policies for non-monotone deteriorating systems with environmental covariates

    International Nuclear Information System (INIS)

    Zhao Xuejing; Fouladirad, Mitra; Berenguer, Christophe; Bordes, Laurent

    2010-01-01

    The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies' performances.

  20. Modeling gross primary production in semi-arid Inner Mongolia using MODIS imagery and eddy covariance data

    Science.gov (United States)

    Ranjeet John; Jiquan Chen; Asko Noormets; Xiangming Xiao; Jianye Xu; Nan Lu; Shiping Chen

    2013-01-01

    We evaluate the modelling of carbon fluxes from eddy covariance (EC) tower observations in different water-limited land-cover/land-use (LCLU) and biome types in semi-arid Inner Mongolia, China. The vegetation photosynthesis model (VPM) and modified VPM (MVPM), driven by the enhanced vegetation index (EVI) and land-surface water index (LSWI), which were derived from the...

  1. The covariance matrix of the Potts model: A random cluster analysis

    International Nuclear Information System (INIS)

    Borgs, C.; Chayes, J.T.

    1996-01-01

    We consider the covariance matrix, G_mn = q^2 <δ(σ_x,m); δ(σ_y,n)>, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β_0. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free boundary condition percolation probability at β_0. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events which we call decoupling events and two inequalities for these events

  2. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
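
A stripped-down sketch of the POET idea, not the authors' reference implementation: remove the leading principal components of the sample covariance, soft-threshold the off-diagonal residual, and add the pieces back. The factor number and threshold below are arbitrary illustrative choices.

```python
import numpy as np

def poet(S, n_factors, tau):
    """POET-style estimator: keep the leading principal components of the
    sample covariance S and soft-threshold the residual covariance."""
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_factors]
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T   # factor part
    resid = S - low_rank
    off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)
    np.fill_diagonal(off, np.diag(resid))    # never threshold the diagonal
    return low_rank + off

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 10))
S = np.cov(X, rowvar=False)
S_poet = poet(S, n_factors=3, tau=0.05)
```

With tau = 0 and all components kept, the estimator reduces to the sample covariance, which is one of the special cases the paper notes.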

  3. A three domain covariance framework for EEG/MEG data

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.

    2015-01-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three

  4. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    Science.gov (United States)

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence-that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.

  5. New numerical approaches for modeling thermochemical convection in a compositionally stratified fluid

    Science.gov (United States)

    Puckett, Elbridge Gerry; Turcotte, Donald L.; He, Ying; Lokavarapu, Harsha; Robey, Jonathan M.; Kellogg, Louise H.

    2018-03-01

    Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method that carries a scalar quantity representing the location of each compositional field. All four algorithms are implemented in the open source finite

  6. Bayesian estimation of covariance matrices: Application to market risk management at EDF

    International Nuclear Information System (INIS)

    Jandrzejewski-Bouriga, M.

    2012-01-01

    In this thesis, we develop new methods of regularized covariance matrix estimation in a Bayesian setting. The regularization methodology employed is first related to shrinkage. We investigate a new Bayesian model of the covariance matrix, based on a hierarchical inverse-Wishart distribution, and then derive different estimators under standard loss functions. Comparisons between shrunk and empirical estimators are performed in terms of frequentist performance under different losses. This allows us to highlight the critical importance of the definition of the cost function and to show the persistent effect of the shrinkage-type prior on inference. Second, we consider the problem of covariance matrix estimation in Gaussian graphical models. While this problem is well treated in the decomposable case, it is not when non-decomposable graphs are also considered. We then describe a Bayesian and operational methodology to carry out the estimation of the covariance matrix of Gaussian graphical models, decomposable or not. This procedure is based on a new and objective method of graphical-model selection, combined with a constrained and regularized estimation of the covariance matrix of the chosen model. The procedures studied effectively manage missing data. These estimation techniques were applied to calculate the covariance matrices involved in market risk management for portfolios of EDF (Electricity of France), in particular for problems of calculating Value-at-Risk or in Asset Liability Management. (author)
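
    As a concrete illustration of the shrinkage behaviour described above, here is a minimal sketch (not from the thesis, which uses a hierarchical prior) of the posterior-mean covariance estimate under a plain inverse-Wishart prior; the function name and choice of prior scale `Psi` are our own.

```python
import numpy as np

def iw_posterior_mean(X, Psi, nu):
    """Posterior-mean covariance estimate under an inverse-Wishart prior.

    X   : (n, p) centred data matrix, rows ~ N(0, Sigma)
    Psi : (p, p) prior scale matrix
    nu  : prior degrees of freedom

    With an IW(Psi, nu) prior, the posterior is IW(Psi + X'X, nu + n),
    whose mean is (Psi + X'X) / (nu + n - p - 1): a shrinkage of the
    empirical scatter matrix toward the prior scale.
    """
    n, p = X.shape
    S = X.T @ X
    return (Psi + S) / (nu + n - p - 1)
```

With much data the estimate tracks the empirical covariance; with little data and a heavy prior scale it is pulled toward `Psi`, which is the shrinkage effect the thesis analyzes under different loss functions.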

  7. RADIAL STABILITY IN STRATIFIED STARS

    International Nuclear Information System (INIS)

    Pereira, Jonas P.; Rueda, Jorge A.

    2015-01-01

    We formulate within a generalized distributional approach the treatment of the stability against radial perturbations for both neutral and charged stratified stars in Newtonian and Einstein's gravity. We obtain from this approach the boundary conditions connecting any two phases within a star and underline its relevance for realistic models of compact stars with phase transitions, owing to the modification of the star's set of eigenmodes with respect to the continuous case.

  8. Survival analysis with functional covariates for partial follow-up studies.

    Science.gov (United States)

    Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming

    2016-12-01

    Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients who may benefit most from treatment. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. The work is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. Absolute lymphocyte cell counts were collected serially during hospitalization. Patients are still followed up if they are alive after hospitalization, but their absolute lymphocyte cell counts can no longer be measured. A further complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on a single observation obviously underutilizes the available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem in which serial multiple biomarkers are available up to a certain time point that is shorter than the total length of follow-up. We therefore propose a solution for the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of the functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors.
Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of

  9. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    Science.gov (United States)

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials, and it leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanism and there is no interaction between baseline covariate and intervention group. Linear mixed model analysis and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation gives unbiased estimates only when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage when the number of clusters in each intervention group is small.
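
    A minimal sketch of the simplest comparator above, unadjusted cluster-level analysis under complete-records handling of missing outcomes; the function name and the simple pooled-variance t statistic are our own illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def cluster_level_analysis(y, cluster, arm):
    """Unadjusted cluster-level analysis of a continuous outcome:
    collapse individuals to cluster means (complete records only,
    i.e. NaN outcomes are dropped) and compare arms with a
    two-sample t statistic computed on the cluster means.

    y, cluster, arm : 1-D arrays, one entry per individual."""
    keep = ~np.isnan(y)
    y, cluster, arm = y[keep], cluster[keep], arm[keep]
    means, groups = [], []
    for c in np.unique(cluster):
        m = cluster == c
        means.append(y[m].mean())
        groups.append(arm[m][0])      # arm is constant within a cluster
    means, groups = np.array(means), np.array(groups)
    a, b = means[groups == 0], means[groups == 1]
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return diff, diff / se
```

The paper's point is that this estimate is only unbiased when both arms share the same missingness mechanism and there is no covariate-by-arm interaction.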

  10. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
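
    A minimal sketch of the degree-1 fractional polynomial (FP1) step underlying MFP: fit y ~ b0 + b1·x^p over the conventional power set and keep the power with the smallest residual sum of squares. The function names are our own; full MFP additionally handles degree-2 polynomials, multiple variables, and significance-based selection.

```python
import numpy as np

# Conventional FP1 candidate powers; by convention power 0 denotes log(x).
FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Select the best first-degree fractional polynomial
    y ~ b0 + b1 * x^p by residual sum of squares over the standard
    power set (x must be strictly positive)."""
    best = None
    for p in FP_POWERS:
        Z = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        coef, res, rank, _ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(res[0]) if res.size else float(np.sum((y - Z @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, p, coef)
    return best  # (rss, selected power, coefficients)
```

On data generated from a logarithmic function, the procedure recovers the log transform (power 0) exactly.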

  11. A Heat Transfer Model for a Stratified Corium-Metal Pool in the Lower Plenum of a Nuclear Reactor

    International Nuclear Information System (INIS)

    Sohal, M.S.; Siefken, L.J.

    1999-01-01

    This preliminary design report describes a model for heat transfer in a stratified corium-metal pool. It was decided to make use of the existing COUPLE model. Currently available correlations for natural convection heat transfer in a pool with and without internal heat generation were obtained. The appropriate correlations will be incorporated into the existing COUPLE model. Heat conduction and solidification modeling will be done with existing algorithms in COUPLE. Assessment of the new model will be done with simple energy conservation problems.

  12. Model-Based Prediction of Pulsed Eddy Current Testing Signals from Stratified Conductive Structures

    International Nuclear Information System (INIS)

    Zhang, Jian Hai; Song, Sung Jin; Kim, Woong Ji; Kim, Hak Joon; Chung, Jong Duk

    2011-01-01

    Excitation and propagation of the electromagnetic field of a cylindrical coil above an arbitrary number of conductive plates for pulsed eddy current testing (PECT) are very complex problems due to their complicated physical properties. In this paper, an analytical model of PECT is established by Fourier series based on the truncated region eigenfunction expansion (TREE) method for a single air-cored coil above stratified conductive structures (SCS) to investigate their integrity. From the presented expression of PECT, the coil impedance due to SCS is calculated with an analytical approach using the generalized reflection coefficient in series form. Then multilayered structures manufactured from non-ferromagnetic (STS301L) and ferromagnetic materials (SS400) are investigated with the developed PECT model. The good predictive capability of the analytical PECT model not only contributes to the development of an efficient solver but can also be applied to optimize the conditions of the experimental setup in PECT.

  13. Weak instruments and the first stage F-statistic in IV models with a nonscalar error covariance structure

    NARCIS (Netherlands)

    Bun, M.; de Haan, M.

    2010-01-01

    We analyze the usefulness of the first stage F-statistic for detecting weak instruments in the IV model with a nonscalar error covariance structure. In particular, we question the validity of the rule of thumb of a first stage F-statistic of 10 or higher for models with correlated errors.
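
    For reference, a minimal sketch (our own, assuming homoskedastic errors, which is precisely the case the paper's rule-of-thumb critique departs from) of the first-stage F-statistic for a single endogenous regressor.

```python
import numpy as np

def first_stage_F(x, Z):
    """First-stage F-statistic for a single endogenous regressor x
    regressed on the instrument matrix Z (intercept added internally).
    Under homoskedastic errors,
      F = ((RSS0 - RSS1) / q) / (RSS1 / (n - q - 1)),
    where q is the number of instruments. The conventional rule of
    thumb flags F < 10 as weak instruments -- the rule whose validity
    the paper questions when the error covariance is nonscalar."""
    n, q = Z.shape
    Z1 = np.column_stack([np.ones(n), Z])
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    rss1 = np.sum((x - Z1 @ beta) ** 2)
    rss0 = np.sum((x - x.mean()) ** 2)
    return ((rss0 - rss1) / q) / (rss1 / (n - q - 1))
```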

  14. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of the greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
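
    The complexity reduction mentioned above can be illustrated with the matrix determinant lemma: for a precision matrix Omega = diag(d) + U·Uᵀ, the log-determinant and the trace term of the Gaussian log-likelihood never require forming or factorizing the full p×p matrix. A minimal sketch (function name is our own):

```python
import numpy as np

def loglik_lowrank_precision(S, d, U, n):
    """Gaussian log-likelihood (up to an additive constant) for a
    precision matrix Omega = diag(d) + U @ U.T, evaluated without
    forming the full p x p determinant. The matrix determinant lemma
    gives
      log det(Omega) = sum(log d) + log det(I_k + U.T @ diag(1/d) @ U),
    which costs O(p k^2) instead of O(p^3).

    S : (p, p) sample covariance,  d : (p,) positive diagonal,
    U : (p, k) low-rank factor,    n : sample size."""
    k = U.shape[1]
    capacitance = np.eye(k) + (U / d[:, None]).T @ U   # k x k
    logdet = np.sum(np.log(d)) + np.linalg.slogdet(capacitance)[1]
    # tr(S Omega) = sum_i d_i S_ii + tr(U' S U), also without forming Omega
    trace_term = np.sum(d * np.diag(S)) + np.sum((S @ U) * U)
    return 0.5 * n * (logdet - trace_term)
```

Each greedy COP iteration can evaluate candidate rank-one updates through this identity.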

  15. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.; Chen, Min; Maadooliat, Mehdi; Pourahmadi, Mohsen

    2012-01-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes

  16. Covariant field equations in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium)]; Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)]

    2017-12-15

    Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  17. Covariant field equations in supergravity

    International Nuclear Information System (INIS)

    Vanhecke, Bram; Proeyen, Antoine van

    2017-01-01

    Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  18. Covariant quantization of heterotic strings in supersymmetric chiral boson formulation

    International Nuclear Information System (INIS)

    Yu, F.

    1992-01-01

    This dissertation presents the covariant supersymmetric chiral boson formulation of the heterotic strings. The main feature of this formulation is the covariant quantization of the so-called leftons and rightons -- the (1,0) supersymmetric generalizations of the world-sheet chiral bosons -- that constitute basic building blocks of general heterotic-type string models. Although the (Neveu-Schwarz-Ramond or Green-Schwarz) heterotic strings provide the most realistic string models, their covariant quantization, with the widely-used Siegel formalism, has never been rigorously carried out. It is clarified in this dissertation that the covariant Siegel formalism is pathological upon quantization. As a test, a general classical covariant (NSR) heterotic string action that has the Siegel symmetry is constructed in arbitrary curved space-time coupled to (1,0) world-sheet supergravity. In the light-cone gauge quantization, the critical dimensions are derived for such an action with leftons and rightons compactified on group manifolds G_L x G_R. The covariant quantization of this action does not agree with the physical results in the light-cone gauge quantization. This dissertation establishes a new formalism for the covariant quantization of heterotic strings. The desired consistent covariant path integral quantization of supersymmetric chiral bosons, and thus of the general (NSR) heterotic-type strings with leftons and rightons compactified on the torus (S^1)^(d_L) x (S^1)^(d_R), is carried out. An infinite set of auxiliary (1,0) scalar superfields is introduced to convert the second-class chiral constraint into first-class ones. The covariant gauge-fixed action has an extended BRST symmetry described by the graded algebra GL(1/1). A regularization respecting this symmetry is proposed to deal with the contributions of the infinite towers of auxiliary fields and associated ghosts.

  19. Mixed model with spatial variance-covariance structure for accommodating of local stationary trend and its influence on multi-environmental crop variety trial assessment

    Energy Technology Data Exchange (ETDEWEB)

    Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.

    2014-06-01

    The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogeneous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and the direct implications of model choice on the inference of varietal performance, ranking and testing based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The forms of the optimally-fitted spatial variance-covariance structure, ranking and consistency ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis including a spatial variance-covariance structure with a group factor of location in the random model also improved the estimation of genotype effects and their ranking. The model improved varietal performance estimation because of its capacity to handle additional sources of variation, location and genotype by location (environment) interaction variation, and to accommodate local stationary trend. (Author)
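
    A common concrete choice of spatial variance-covariance structure in field-trial mixed models is a separable AR(1)×AR(1) model over the row/column layout of plots. The sketch below (function names are our own; the study does not specify this exact structure for every trial) builds such a matrix with a Kronecker product.

```python
import numpy as np

def ar1_correlation(n, rho):
    """AR(1) correlation matrix with entries rho^|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def separable_field_cov(n_rows, n_cols, rho_r, rho_c, sigma2):
    """Separable AR(1) x AR(1) spatial variance-covariance for a field
    trial laid out on an n_rows x n_cols grid (plots ordered row-major):
      V = sigma2 * R_rows(rho_r) (Kronecker) R_cols(rho_c),
    a standard way of modelling local stationary trend in the mixed
    models compared in the study above."""
    return sigma2 * np.kron(ar1_correlation(n_rows, rho_r),
                            ar1_correlation(n_cols, rho_c))
```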

  20. Multiple feature fusion via covariance matrix for visual tracking

    Science.gov (United States)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

    Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the covariance intersection algorithm are used to improve the efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and so on.
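
    The region covariance descriptor itself is simple to sketch: build a per-pixel feature vector and take the covariance of those features over the patch. The sketch below (our own; the paper's feature set includes color, edge and texture channels rather than this grayscale choice) uses coordinates, intensity, and gradient magnitudes.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor of a grayscale image patch.

    For each pixel we build the feature vector
      [x, y, I, |I_x|, |I_y|]
    (coordinates, intensity, first-derivative magnitudes) and return
    the 5 x 5 covariance of these features over the region -- a compact,
    low-dimensional descriptor of the kind fused over colour/edge/
    texture channels in the tracking algorithm above."""
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(F)
```

Its fixed small size (here 5×5 regardless of patch size) is what makes matching and fusion cheap.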

  1. On the Methodology to Calculate the Covariance of Estimated Resonance Parameters

    International Nuclear Information System (INIS)

    Becker, B.; Kopecky, S.; Schillebeeckx, P.

    2015-01-01

    Principles to determine resonance parameters and their covariance from experimental data are discussed. Different methods to propagate the covariance of experimental parameters are compared. A full Bayesian statistical analysis reveals that the level to which the initial uncertainty of the experimental parameters propagates strongly depends on the experimental conditions. For high precision data, the initial uncertainties of experimental parameters, like a normalization factor, have almost no impact on the covariance of the parameters in the case of thick sample measurements, whether conventional uncertainty propagation or full Bayesian analysis is used. The covariances derived from a full Bayesian analysis and a least-squares fit are obtained under the condition that the model describing the experimental observables is perfect. When the quality of the model cannot be verified, a more conservative method based on a renormalization of the covariance matrix is recommended to fully propagate the uncertainty of experimental systematic effects. Finally, neutron resonance transmission analysis is proposed as an accurate method to validate evaluated data libraries in the resolved resonance region.
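
    The conventional propagation referred to above reduces, in the linear approximation, to the standard least-squares formula; a minimal sketch (our own) with the Jacobian of the model with respect to the resonance parameters:

```python
import numpy as np

def lsq_parameter_covariance(J, W):
    """Linear-propagation covariance of least-squares parameter
    estimates: for data weights W (inverse data covariance) and
    Jacobian J = d(model)/d(parameters),
      Cov(theta) = (J^T W J)^{-1}.
    This is the conventional propagation that the paper compares with
    a full Bayesian analysis and with more conservative, renormalized
    variants used when model quality cannot be verified."""
    return np.linalg.inv(J.T @ W @ J)
```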

  2. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement, and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media.
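
    The core of any reflectivity-style method is a recursive (generalized) reflection coefficient built bottom-up through the layer stack. A minimal sketch for a plane wave at normal incidence on lossless dielectric layers (our own simplification; the paper's implementation handles oblique incidence, losses and the full wavefield):

```python
import numpy as np

def stack_reflection(n, d, wavelength):
    """Generalized (recursive) reflection coefficient of a plane wave
    at normal incidence on a stack of homogeneous layers.

    n : refractive indices [n0, n1, ..., nL] (n0 = incidence half-space,
        nL = substrate half-space)
    d : thicknesses [d1, ..., d_{L-1}] of the interior layers
    wavelength : in the same units as d

    Built bottom-up from the interface Fresnel coefficients r_j via
      R_j = (r_j + R_{j+1} e^{2 i k_{j+1} d_{j+1}})
            / (1 + r_j R_{j+1} e^{2 i k_{j+1} d_{j+1}})."""
    k = 2 * np.pi * np.asarray(n, dtype=complex) / wavelength
    R = (n[-2] - n[-1]) / (n[-2] + n[-1])   # bottom interface
    for j in range(len(n) - 3, -1, -1):
        r = (n[j] - n[j + 1]) / (n[j] + n[j + 1])
        phase = np.exp(2j * k[j + 1] * d[j])
        R = (r + R * phase) / (1 + r * R * phase)
    return R
```

A classic sanity check is the quarter-wave antireflection coating, for which the recursion returns zero net reflection.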

  3. Cross-covariance functions for multivariate random fields based on latent dimensions

    KAUST Repository

    Apanasovich, T. V.; Genton, M. G.

    2010-01-01

    The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable

  4. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989), which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
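
    The quantity the conjecture revolves around, the length of the tree path shared by two leaf-to-leaf paths, is easy to compute from a rooted tree. A minimal sketch (our own, pure bookkeeping, not the covariance estimator itself), with edges identified by their child node:

```python
def path_edges(parent, u, v):
    """Edge set (each edge named by its child node) on the path between
    leaves u and v in a rooted tree given as a parent-pointer dict."""
    def to_root(a):
        path = []
        while a in parent:
            path.append(a)
            a = parent[a]
        return path
    # symmetric difference drops the common part above the two leaves
    return set(to_root(u)) ^ set(to_root(v))

def shared_path_length(parent, length, pair1, pair2):
    """Total length of edges shared by two leaf-to-leaf paths -- the
    quantity the Nei & Jin conjecture links to Cov(d_ij, d_kl)."""
    shared = path_edges(parent, *pair1) & path_edges(parent, *pair2)
    return sum(length[e] for e in shared)
```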

  5. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  6. Predicting kidney graft failure using time-dependent renal function covariates

    NARCIS (Netherlands)

    de Bruijne, Mattheus H. J.; Sijpkens, Yvo W. J.; Paul, Leendert C.; Westendorp, Rudi G. J.; van Houwelingen, Hans C.; Zwinderman, Aeilko H.

    2003-01-01

    Chronic rejection and recurrent disease are the major causes of late graft failure in renal transplantation. To assess outcome, most researchers use Cox proportional hazard analysis with time-fixed covariates. We developed a model adding time-dependent renal function covariates to improve the

  7. Quantum mechanics vs. general covariance in gravity and string models

    International Nuclear Information System (INIS)

    Martinec, E.J.

    1984-01-01

    Quantization of simple low-dimensional systems embodying general covariance is studied. Functional methods are employed in the calculation of effective actions for fermionic strings and 1 + 1 dimensional gravity. The author finds that regularization breaks apparent symmetries of the theory, providing new dynamics for the string and non-trivial dynamics for 1 + 1 gravity. The author moves on to consider the quantization of some generally covariant systems with a finite number of physical degrees of freedom, assuming the existence of an invariant cutoff. The author finds that the wavefunction of the universe in these cases is given by the solution to simple quantum mechanics problems

  8. Using Covariant Lyapunov Vectors to Understand Spatiotemporal Chaos in Fluids

    Science.gov (United States)

    Paul, Mark; Xu, Mu; Barbish, Johnathon; Mukherjee, Saikat

    2017-11-01

    The spatiotemporal chaos of fluids presents many difficult and fascinating challenges. Recent progress in computing covariant Lyapunov vectors for a variety of model systems has made it possible to probe fundamental ideas from dynamical systems theory, including the degree of hyperbolicity, the fractal dimension, the dimension of the inertial manifold, and the decomposition of the dynamics into a finite number of physical modes and spurious modes. We are interested in building upon insights such as these for fluid systems. We first demonstrate the power of covariant Lyapunov vectors using a system of maps on a lattice with a nonlinear coupling. We then compute the covariant Lyapunov vectors for chaotic Rayleigh-Bénard convection under experimentally accessible conditions. We show that chaotic convection is non-hyperbolic and we quantify the spatiotemporal features of the spectrum of covariant Lyapunov vectors. NSF DMS-1622299 and DARPA/DSO Models, Dynamics, and Learning (MoDyL).
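
    A minimal sketch of the coupled-map-lattice setting mentioned above: estimating the Lyapunov spectrum of a diffusively coupled logistic lattice by evolving tangent vectors with repeated QR re-orthonormalization (the backward/forward Gram-Schmidt passes of full covariant-Lyapunov-vector algorithms build on the same backbone). All parameter values here are our own illustrative choices.

```python
import numpy as np

def logistic_lattice_lyapunov(N=16, eps=0.1, a=4.0, steps=2000, seed=0):
    """Lyapunov spectrum of the coupled logistic map lattice
      x_i <- (1-eps) f(x_i) + (eps/2) (f(x_{i-1}) + f(x_{i+1})),
    f(x) = a x (1-x), periodic boundaries, estimated by evolving an
    orthonormal tangent frame and re-orthonormalizing with QR."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, N)
    Q = np.eye(N)
    lam = np.zeros(N)
    # coupling matrix: (1-eps) on the diagonal, eps/2 on periodic neighbours
    C = ((1 - eps) * np.eye(N)
         + (eps / 2) * (np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)))
    for _ in range(steps):
        fx = a * x * (1 - x)
        J = C * (a * (1 - 2 * x))       # chain rule: C @ diag(f'(x))
        x = C @ fx
        Q, R = np.linalg.qr(J @ Q)
        lam += np.log(np.abs(np.diag(R)))
    return np.sort(lam / steps)[::-1]
```

For a = 4 the lattice is chaotic, so the leading exponent is positive.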

  9. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming a sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
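
    A minimal sketch of the POET recipe (our own simplification using a fixed hard threshold; the paper uses adaptive, entry-dependent thresholds and data-driven K): keep the K leading principal components of the sample covariance and threshold the off-diagonal entries of the residual.

```python
import numpy as np

def poet(X, K, tau):
    """Simplified POET covariance estimate: low-rank part from the K
    leading principal components of the sample covariance, plus the
    residual ("sparse") part with off-diagonal entries hard-thresholded
    at level tau. The diagonal is always kept intact.

    X : (n, p) data, K : number of factors, tau : threshold level."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)                      # ascending eigenvalues
    lead = V[:, -K:] * w[-K:] @ V[:, -K:].T       # low-rank factor part
    resid = S - lead
    thresholded = np.where(np.abs(resid) >= tau, resid, 0.0)
    np.fill_diagonal(thresholded, np.diag(resid))
    return lead + thresholded
```

With tau = 0 the estimator reproduces the sample covariance exactly, one of the special cases noted in the abstract.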

  10. Free Falling in Stratified Fluids

    Science.gov (United States)

    Lam, Try; Vincent, Lionel; Kanso, Eva

    2017-11-01

    Leaves falling in air and discs falling in water are examples of unsteady descents due to complex interactions between gravitational and aerodynamic forces. Understanding these descent modes is relevant to many branches of engineering and science, from estimating the behavior of re-entry space vehicles to studying the biomechanics of seed dispersion. For regularly shaped objects falling in homogeneous fluids, the motion is relatively well understood. However, less is known about how density stratification of the fluid medium affects the falling behavior. Here, we experimentally investigate the descent of discs in both pure water and in stable linearly stratified fluids for Froude numbers Fr 1 and Reynolds numbers Re between 1000-2000. We found that stable stratification (1) enhances the radial dispersion of the disc at landing, (2) increases the descent time, (3) decreases the inclination (or nutation) angle, and (4) decreases the fluttering amplitude while falling. We conclude by commenting on how the corresponding information can be used as a predictive model for objects free falling in stratified fluids.

  11. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    Correlation in bi-response longitudinal data arises both among repeated measurements on the same subject and between the two responses. This induces auto-correlated errors, which can be handled with a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. A penalized spline involves knot points and smoothing parameters that jointly control the smoothness of the curve. Our simulation study shows that the weighted penalized spline regression model with a covariance matrix gives a smaller error than the model without a covariance matrix.
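
    A minimal sketch of a penalized spline fit with a truncated power basis and a ridge penalty on the knot coefficients; the function name and basis choice are our own (the article's model additionally weights by the estimated covariance matrix).

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Penalized (truncated-power-basis) spline fit: minimize
      ||y - B beta||^2 + lam * beta' D beta,
    where B = [1, x, (x-k_1)_+, ..., (x-k_K)_+] and D penalizes only
    the truncated-power (knot) coefficients. lam is the smoothing
    parameter that, together with the knots, controls smoothness."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0, None) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B, beta
```

On exactly linear data the knot coefficients are driven to zero, so the penalized fit recovers the line regardless of lam.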

  12. Galaxy-galaxy lensing estimators and their covariance properties

    Science.gov (United States)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
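    The delete-one jackknife covariance mentioned among the empirical estimates can be sketched on toy data (synthetic draws standing in for per-region measurements of a binned correlation function; not the SDSS pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy measurements of a 5-bin "correlation function" in 50 jackknife
# regions (synthetic independent draws, not survey data).
nreg, nbin = 50, 5
signal = np.linspace(1.0, 0.2, nbin)
samples = signal + rng.normal(scale=0.1, size=(nreg, nbin))

# Delete-one jackknife: mean over all regions but one, for each region.
jk = np.array([np.delete(samples, i, axis=0).mean(axis=0)
               for i in range(nreg)])
jk_mean = jk.mean(axis=0)

# Jackknife covariance estimate with the (n-1)/n scaling applied to the
# sum of outer products of the delete-one deviations.
C = (nreg - 1) / nreg * (jk - jk_mean).T @ (jk - jk_mean)
print(np.round(np.sqrt(np.diag(C)), 4))
```

For independent regions the diagonal of `C` approaches the variance of the mean, `0.1**2 / nreg`, which is the standard consistency check for a jackknife implementation.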

  13. Galaxy–galaxy lensing estimators and their covariance properties

    International Nuclear Information System (INIS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez

    2017-01-01

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  14. Uncertainty covariances in robotics applications

    International Nuclear Information System (INIS)

    Smith, D.L.

    1984-01-01

    The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized

  15. Covariant w∞ gravity

    NARCIS (Netherlands)

    Bergshoeff, E.; Pope, C.N.; Stelle, K.S.

    1990-01-01

    We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.

  16. A class of Matérn-like covariance functions for smooth processes on a sphere

    KAUST Repository

    Jeong, Jaehong

    2015-02-01

    © 2014 Elsevier Ltd. There have been noticeable advancements in developing parametric covariance models for spatial and spatio-temporal data, with various applications to environmental problems. However, the literature on covariance models for processes defined on the surface of a sphere with great circle distance as the distance metric is still sparse, owing to the mathematical difficulties involved. It is known that the popular Matérn covariance function, with smoothness parameter greater than 0.5, is not valid for processes on the surface of a sphere with great circle distance. We introduce an approach to produce Matérn-like covariance functions for smooth processes on the surface of a sphere that are valid with great circle distance. The resulting model is isotropic and positive definite on the surface of a sphere with great circle distance, with a natural extension to the nonstationary case. We present extensive numerical comparisons of our model with a Matérn covariance model using great circle distance as well as chordal distance. We apply our new covariance model class to sea level pressure data, known to be smooth compared to other climate variables, from the CMIP5 climate model outputs.
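    A small numpy illustration of the underlying validity issue, using the closed-form Matérn with smoothness 1.5 on random points on the sphere (the range and point set are arbitrary choices, not from the paper): with chordal distance the resulting matrix is guaranteed positive semi-definite, while with great circle distance it need not be.

```python
import numpy as np

def matern_15(d, range_=0.5, sigma2=1.0):
    """Matérn covariance with smoothness nu = 1.5 (closed form)."""
    a = np.sqrt(3.0) * d / range_
    return sigma2 * (1.0 + a) * np.exp(-a)

# Random points on the unit sphere.
rng = np.random.default_rng(1)
x = rng.normal(size=(40, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

dot = np.clip(x @ x.T, -1.0, 1.0)
gc = np.arccos(dot)                                     # great-circle distance
ch = np.linalg.norm(x[:, None] - x[None, :], axis=-1)   # chordal distance

# Chordal distance inherits validity from R^3, so the smallest eigenvalue
# is non-negative (up to rounding); the great-circle matrix may or may not
# show a negative eigenvalue for a given point configuration.
eig_ch = np.linalg.eigvalsh(matern_15(ch)).min()
eig_gc = np.linalg.eigvalsh(matern_15(gc)).min()
print(round(eig_ch, 6), round(eig_gc, 6))
```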

  17. A class of Matérn-like covariance functions for smooth processes on a sphere

    KAUST Repository

    Jeong, Jaehong; Jun, Mikyoung

    2015-01-01

    © 2014 Elsevier Ltd. There have been noticeable advancements in developing parametric covariance models for spatial and spatio-temporal data, with various applications to environmental problems. However, the literature on covariance models for processes defined on the surface of a sphere with great circle distance as the distance metric is still sparse, owing to the mathematical difficulties involved. It is known that the popular Matérn covariance function, with smoothness parameter greater than 0.5, is not valid for processes on the surface of a sphere with great circle distance. We introduce an approach to produce Matérn-like covariance functions for smooth processes on the surface of a sphere that are valid with great circle distance. The resulting model is isotropic and positive definite on the surface of a sphere with great circle distance, with a natural extension to the nonstationary case. We present extensive numerical comparisons of our model with a Matérn covariance model using great circle distance as well as chordal distance. We apply our new covariance model class to sea level pressure data, known to be smooth compared to other climate variables, from the CMIP5 climate model outputs.

  18. Supersymmetric gauged scale covariance in ten and lower dimensions

    International Nuclear Information System (INIS)

    Nishino, Hitoshi; Rajpoot, Subhash

    2004-01-01

    We present globally supersymmetric models of gauged scale covariance in ten, six, and four dimensions. This is an application of a recent similar gauging in three dimensions for a massive self-dual vector multiplet. In ten dimensions, we couple a single vector multiplet to another vector multiplet, where the latter gauges the scale covariance of the former. Due to scale covariance, the system does not have a Lagrangian formulation, but only a set of field equations, like Type IIB supergravity in ten dimensions. As by-products, we construct similar models in six dimensions with N=(2,0) supersymmetry, and in four dimensions with N=1 supersymmetry. We finally obtain a similar model with N=4 supersymmetry in four dimensions, with consistent interactions not previously known. We expect a series of descendant theories in dimensions lower than ten by dimensional reduction. This result also indicates that similar mechanisms will work for other vector and scalar multiplets in space-time dimensions lower than ten

  19. Meta-analytical synthesis of regression coefficients under different categorization scheme of continuous covariates.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-11-30

    Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.
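    The core problem can be illustrated with a short simulation (cutoffs and slope are invented for illustration): dichotomizing the same continuous covariate at different study-specific cutoffs yields systematically different regression coefficients, even though the underlying continuous relation is identical, which is exactly why naive pooling of such coefficients is biased.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)             # continuous covariate
y = 0.5 * x + rng.normal(size=n)   # true slope on the continuous scale

def dichotomized_coef(cut):
    """Slope of y on the indicator 1{x > cut} (simple OLS)."""
    z = (x > cut).astype(float)
    return np.cov(z, y)[0, 1] / np.var(z)

# Study-specific cutoffs produce systematically different coefficients,
# even though the data-generating slope is the same 0.5 throughout.
coefs = {c: round(dichotomized_coef(c), 2) for c in (-1.0, 0.0, 1.0)}
print(coefs)
```

For a standard-normal covariate the dichotomized slope is 0.5·φ(c)/(p(1−p)) with p = P(x > c), so the cutoff-dependence above is predictable, which is what correction terms of the kind proposed in the record exploit.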

  20. Promoting Modeling and Covariational Reasoning among Secondary School Students in the Context of Big Data

    Science.gov (United States)

    Gil, Einat; Gibbs, Alison L.

    2017-01-01

    In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…

  1. Evaluation and processing of covariance data

    International Nuclear Information System (INIS)

    Wagner, M.

    1993-01-01

    These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into 4 parts: part 1 - needs for evaluated covariance data (2 papers); part 2 - generation of covariance data (15 papers); part 3 - processing of covariance files (2 papers); part 4 - experience in the use of evaluated covariance data (2 papers)

  2. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has processing functions for covariance data of cross sections, including resonance parameters, angular distributions and energy distributions. (author)

  3. Massive data compression for parameter-dependent covariance matrices

    Science.gov (United States)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets needed to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters; in this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3 and making an otherwise intractable analysis feasible.
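    A minimal sketch of the MOPED idea for a single parameter, assuming a Gaussian model with parameter-independent covariance (toy sizes, not a survey analysis): the data vector is compressed to one number with a weight vector built from the mean's parameter derivative, and the Fisher information about the parameter is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50                               # size of the data vector

# Toy Gaussian model: mean depends linearly on one parameter theta,
# covariance C is fixed (parameter-independent) for this illustration.
mu_grad = rng.normal(size=n)         # d mu / d theta
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)          # well-conditioned covariance

Cinv = np.linalg.inv(C)

# MOPED weight vector for the single parameter (normalized so that the
# compressed number has unit variance).
b = Cinv @ mu_grad
b /= np.sqrt(mu_grad @ Cinv @ mu_grad)

# Fisher information about theta: full data vs one compressed number.
F_full = mu_grad @ Cinv @ mu_grad
F_comp = (b @ mu_grad) ** 2 / (b @ C @ b)
print(bool(np.isclose(F_full, F_comp)))
```

With several parameters, MOPED Gram-Schmidt-orthogonalizes one such vector per parameter, so an n-dimensional data set collapses to as many numbers as there are parameters.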

  4. Visualization of mole fraction distribution of slow jet forming stably stratified field

    International Nuclear Information System (INIS)

    Fumizawa, Motoo; Hishida, Makoto

    1990-01-01

    An experimental study has been performed to investigate the behavior of flow and mass transfer in a gaseous slow jet in which the buoyancy force opposed the flow, forming a stably stratified field. The study was performed to understand the basic features of air ingress phenomena in a pipe rupture accident of a high temperature gas-cooled reactor. A displacement fringe technique was adopted in a Mach-Zehnder interferometer to visualize the mole fraction distribution. As a result, the following was obtained: (1) Stably stratified fields were formed in the vicinity of the outlet of the slow jet. The penetration distance of the stably stratified fields increased with Froude number. (2) Mass fraction distributions in the stably stratified fields were well correlated with the present model using the ramp mole velocity profile. (author)

  5. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    Science.gov (United States)

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using both weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
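    A rough numpy sketch of variogram fitting by weighted least squares, on synthetic 1-D data with an exponential model and a coarse grid search instead of a proper optimizer (all sizes, ranges and bins here are illustrative, not the campaign's setup): bins with more pairs get more weight, which is the usual WLS criterion for variogram fitting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D "transect" with exponentially correlated residuals.
n = 300
s = np.sort(rng.uniform(0, 10, n))
d = np.abs(np.subtract.outer(s, s))
true_range, true_sill = 1.5, 1.0
C = true_sill * np.exp(-d / true_range)
z = rng.multivariate_normal(np.zeros(n), C)

# Empirical semivariogram in distance bins.
iu = np.triu_indices(n, 1)
h, g = d[iu], 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
bins = np.linspace(0, 4, 9)
idx = np.digitize(h, bins) - 1
hb = np.array([h[idx == k].mean() for k in range(8)])
gb = np.array([g[idx == k].mean() for k in range(8)])
nb = np.array([(idx == k).sum() for k in range(8)])

# Weighted least squares over a grid of (sill, range) for the
# exponential model gamma(h) = sill * (1 - exp(-h / range)).
def wls(sill, range_):
    resid = gb - sill * (1.0 - np.exp(-hb / range_))
    return np.sum(nb * resid ** 2)

grid = [(sl, r) for sl in np.linspace(0.5, 1.5, 21)
                for r in np.linspace(0.5, 3.0, 26)]
sill_hat, range_hat = min(grid, key=lambda p: wls(*p))
print(sill_hat, range_hat)
```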

  6. Reconstruction of sparse connectivity in neural networks from spike train covariances

    International Nuclear Information System (INIS)

    Pernice, Volker; Rotter, Stefan

    2013-01-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse, nor too dense, and it is easily applicable for networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous–irregular state, where spike train covariances are well described by a linear model. (paper)

  7. Large Eddy Simulation of stratified flows over structures

    OpenAIRE

    Brechler J.; Fuka V.

    2013-01-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences were present.

  8. Large Eddy Simulation of stratified flows over structures

    Science.gov (United States)

    Fuka, V.; Brechler, J.

    2013-04-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences were present.

  9. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  10. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  11. Covariant constraints for generic massive gravity and analysis of its characteristics

    DEFF Research Database (Denmark)

    Deser, S.; Sandora, M.; Waldron, A.

    2014-01-01

    We perform a covariant constraint analysis of massive gravity valid for its entire parameter space, demonstrating that the model generically propagates 5 degrees of freedom; this is also verified by a new and streamlined Hamiltonian description. The constraint's covariant expression permits...

  12. Structure of Poincare covariant tensor operators in quantum mechanical models

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Klink, W.H.

    1988-01-01

    The structure of operators that transform covariantly in Poincare invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincare group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincare group. The matrix elements are factored into geometrical coefficients (Clebsch--Gordan coefficients for the Poincare group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors. copyright 1988 Academic Press, Inc

  13. Quarkonia and heavy-light mesons in a covariant quark model

    Directory of Open Access Journals (Sweden)

    Leitão Sofia

    2016-01-01

    Preliminary calculations using the Covariant Spectator Theory (CST) employed a scalar linear confining interaction and an additional constant vector potential to compute the mesonic mass spectra. In this work we generalize the confining interaction to include more general structures, in particular a vector and also a pseudoscalar part, as suggested by a recent study [1]. A one-gluon-exchange kernel is also implemented to describe the short-range part of the interaction. We solve the simplest CST approximation to the complete Bethe-Salpeter equation, the one-channel spectator equation, using a numerical technique that eliminates all singularities from the kernel. The parameters of the model are determined through a fit to the experimental pseudoscalar meson spectra, with a good agreement for both quarkonia and heavy-light states.

  14. The optical interface of a photonic crystal: Modeling an opal with a stratified effective index

    OpenAIRE

    Maurin, Isabelle; Moufarej, Elias; Laliotis, Athanasios; Bloch, Daniel

    2014-01-01

    An artificial opal is a compact arrangement of transparent spheres, and is an archetype of a three-dimensional photonic crystal. Here, we describe the optics of an opal using a flexible model based upon a stratified medium whose (effective) index is governed by the opal density in a small planar slice of the opal. We take into account the effect of the substrate and assume a well-controlled number of layers, as occurs for an opal fabricated by Langmuir-Blodgett deposition. The calculation...
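    The stratified-medium picture can be sketched with the standard 2×2 transfer-matrix method at normal incidence; the effective-index profile below is a toy stand-in, not the paper's opal-density model.

```python
import numpy as np

def reflectance(layers, n0=1.0, ns=1.5, lam=0.6):
    """Normal-incidence reflectance of a stack of homogeneous layers
    using the standard 2x2 transfer-matrix method.
    layers: list of (refractive index, thickness) pairs, thickness and
    wavelength in the same units."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / lam   # phase thickness of the layer
        L = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ L
    num = (M[0, 0] + M[0, 1] * ns) * n0 - (M[1, 0] + M[1, 1] * ns)
    den = (M[0, 0] + M[0, 1] * ns) * n0 + (M[1, 0] + M[1, 1] * ns)
    return abs(num / den) ** 2

# A slice of "opal" modeled as thin slabs whose effective index follows
# the local sphere filling fraction (toy profile).
n_eff = 1.0 + 0.35 * np.sin(np.linspace(0, np.pi, 20))
stack = [(n, 0.02) for n in n_eff]
print(round(reflectance(stack), 4))
```

A quick sanity check of such an implementation: with no layers it reduces to the Fresnel reflectance of the bare interface, and a quarter-wave layer of index √(n0·ns) gives zero reflectance.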

  15. The effect of existing turbulence on stratified shear instability

    Science.gov (United States)

    Kaminski, Alexis; Smyth, William

    2017-11-01

    Ocean turbulence is an essential process governing, for example, heat uptake by the ocean. In the stably-stratified ocean interior, this turbulence occurs in discrete events driven by vertical variations of the horizontal velocity. Typically, these events have been modelled by assuming an initially laminar stratified shear flow which develops wavelike instabilities, becomes fully turbulent, and then relaminarizes into a stable state. However, in the real ocean there is always some level of turbulence left over from previous events, and it is not yet understood how this turbulence impacts the evolution of future mixing events. Here, we perform a series of direct numerical simulations of turbulent events developing in stratified shear flows that are already at least weakly turbulent. We do so by varying the amplitude of the initial perturbations, and examine the subsequent development of the instability and the impact on the resulting turbulent fluxes. This work is supported by NSF Grant OCE1537173.

  16. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    Science.gov (United States)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
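    The propagation step described above rests on the standard "sandwich rule": the relative variance of k is SᵀMS for a sensitivity vector S and a relative covariance matrix M of the cross sections. A toy illustration (all numbers invented, not values from the MHTGR-350 benchmark or SCALE-6 libraries):

```python
import numpy as np

# Relative sensitivity of k to three reactions (dk/k per dx/x), e.g.
# a capture, another capture, and nu-bar (illustrative values only).
S = np.array([-0.12, -0.35, 0.95])

# Relative covariance matrix of the three cross sections, built from
# fractional standard deviations and a correlation matrix.
sd = np.array([0.04, 0.02, 0.005])      # 4%, 2%, 0.5%
corr = np.array([[1.0, 0.2, 0.0],
                 [0.2, 1.0, 0.1],
                 [0.0, 0.1, 1.0]])
M = np.outer(sd, sd) * corr

# Sandwich rule: relative variance of k = S^T M S.
var_k = S @ M @ S
print(round(np.sqrt(var_k) * 100, 2), "% relative uncertainty in k")
```

This is the same quadratic form that tools like TSUNAMI/SAMS evaluate with full multigroup sensitivity profiles and covariance libraries in place of the three-entry toy vectors used here.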

  17. On the fit of models to covariances and methodology to the Bulletin.

    Science.gov (United States)

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  18. Estimating surface fluxes using eddy covariance and numerical ogive optimization

    DEFF Research Database (Denmark)

    Sievers, J.; Papakyriakou, T.; Larsen, Søren Ejling

    2015-01-01

    Estimating representative surface fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modelling efforts, low-frequency con...
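    The ogive diagnostic referred to in the title can be sketched with numpy: accumulate the cospectrum of vertical wind and a scalar from the highest frequency downwards, so the low-frequency end of the ogive approaches the total covariance, i.e. the flux (synthetic series below, not flux-tower data).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 10 Hz series of vertical wind w and a scalar c whose
# fluctuations are partly correlated (toy data -> nonzero "flux").
fs, n = 10.0, 2 ** 14
w = rng.normal(size=n)
c = 0.3 * w + rng.normal(size=n)

w -= w.mean()
c -= c.mean()

# One-sided cospectrum via FFT: Co(f) = 2 Re(W conj(C)) / (n fs).
W, C = np.fft.rfft(w), np.fft.rfft(c)
freq = np.fft.rfftfreq(n, d=1.0 / fs)
co = np.real(W * np.conj(C)) * 2.0 / (n * fs)

# Ogive: cumulative cospectrum from the highest frequency downwards.
df = freq[1] - freq[0]
ogive = np.cumsum(co[::-1])[::-1] * df

# At the lowest frequency the ogive approaches the total covariance,
# which is the eddy-covariance flux estimate.
print(round(ogive[0], 3))
```

Inspecting where the ogive flattens is the usual way to judge whether low-frequency contributions have converged within the averaging period.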

  19. Large eddy simulation of turbulent and stably-stratified flows

    International Nuclear Information System (INIS)

    Fallon, Benoit

    1994-01-01

    The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulation with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows undergoing helical pairing, with A-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show, with increasing stratification, how the shear layer growth is frozen by inhibition of the pairing process and then of the Kelvin-Helmholtz instabilities, and by the development of gravity waves or stable density interfaces. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation for industrial flows is demonstrated by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations in order to understand and visualize turbulent interactions. (author) [fr

  20. A covariant canonical description of Liouville field theory

    International Nuclear Information System (INIS)

    Papadopoulos, G.; Spence, B.

    1993-03-01

    This paper presents a new parametrisation of the space of solutions of Liouville field theory on a cylinder. In this parametrisation, the solutions are well-defined and manifestly real functions over all space-time and all of parameter space. It is shown that the resulting covariant phase space of the Liouville theory is diffeomorphic to the Hamiltonian one, and to the space of initial data of the theory. The Poisson brackets are derived and shown to be those of the co-tangent bundle of the loop group of the real line. Using Hamiltonian reduction, it is shown that this covariant phase space formulation of Liouville theory may also be obtained from the covariant phase space formulation of the Wess-Zumino-Witten model. 19 refs

  1. Covariant boost and structure functions of baryons in Gross-Neveu models

    International Nuclear Information System (INIS)

    Brendel, Wieland; Thies, Michael

    2010-01-01

    Baryons in the large N limit of two-dimensional Gross-Neveu models are reconsidered. The time-dependent Dirac-Hartree-Fock approach is used to boost a baryon to any inertial frame and shown to yield the covariant energy-momentum relation. Momentum distributions are computed exactly in arbitrary frames and used to interpolate between the rest frame and the infinite momentum frame, where they are related to structure functions. Effects from the Dirac sea depend sensitively on the occupation fraction of the valence level and the bare fermion mass and do not vanish at infinite momentum. In the case of the kink baryon, they even lead to divergent quark and antiquark structure functions at x=0.

  2. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan

    2011-10-10

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n^3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.

  3. A full scale approximation of covariance functions for large spatial data sets

    KAUST Repository

    Sang, Huiyan; Huang, Jianhua Z.

    2011-01-01

    Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n^3) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
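The reduced-rank-plus-tapered-residual construction described in the record above can be sketched in a few lines of NumPy. All numbers below (an exponential covariance with range 0.3, 15 knots, a Wendland-type taper of range 0.15) are hypothetical choices for illustration, not parameters from the paper:

```python
import numpy as np

def exp_cov(d, range_=0.3):
    """Exponential covariance, a stand-in model with hypothetical parameters."""
    return np.exp(-d / range_)

def taper(d, gamma=0.15):
    """Wendland-type compactly supported taper: exactly zero beyond gamma."""
    x = np.clip(d / gamma, 0.0, 1.0)
    return (1 - x) ** 4 * (4 * x + 1)

# n locations on [0, 1]; m knots for the reduced-rank part
rng = np.random.default_rng(0)
s = np.sort(rng.uniform(0, 1, 200))
knots = np.linspace(0, 1, 15)

D_ss = np.abs(s[:, None] - s[None, :])
D_sk = np.abs(s[:, None] - knots[None, :])
D_kk = np.abs(knots[:, None] - knots[None, :])

C = exp_cov(D_ss)                        # exact covariance
C_sk = exp_cov(D_sk)
C_kk = exp_cov(D_kk)

# Part 1: reduced-rank (predictive-process) covariance of rank m
C_rr = C_sk @ np.linalg.solve(C_kk, C_sk.T)

# Part 2: taper the residual covariance so the correction is sparse
C_fs = C_rr + (C - C_rr) * taper(D_ss)

err_rr = np.max(np.abs(C - C_rr))        # reduced rank alone
err_fs = np.max(np.abs(C - C_fs))        # full-scale approximation
```

Because the residual term is tapered to zero beyond the taper range, the correction matrix is sparse, which is what enables the efficient likelihood and prediction computations the record mentions.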

  4. Comparison of chamber and eddy covariance-based CO2 and CH4 emission estimates in a heterogeneous grass ecosystem on peat

    International Nuclear Information System (INIS)

    Schrier-Uijl, A.P.; Berendse, F.; Veenendaal, E.M.; Kroon, P.S.; Hensen, A.; Leffelaar, P.A.

    2010-08-01

    Fluxes of methane (CH4) and carbon dioxide (CO2) estimated by empirical models based on small-scale chamber measurements were compared to large-scale eddy covariance (EC) measurements for CH4, and to a combination of EC measurements and EC-based models for CO2. The experimental area was a flat peat meadow in the Netherlands with heterogeneous source strengths for both greenhouse gases. Two scenarios were used to assess the importance of stratifying the landscape into landscape elements before up-scaling the fluxes measured by chambers to landscape scale: one took the main landscape elements into account (field, ditch edge, ditch), the other took only the field into account. Non-linear regression models were used to up-scale the chamber measurements to field emission estimates. The EC-based CO2 respiration estimate consisted of measured night-time EC fluxes and modelled daytime fluxes using the Arrhenius model. The EC-based CH4 flux estimate was based on daily averages, with the remaining data gaps filled by linear interpolation. The EC and chamber-based estimates agreed well when the three landscape elements were taken into account, with differences of 16.5% and 13.0% for CO2 respiration and CH4, respectively. However, the two methods differed by 31.0% and 55.1% for CO2 respiration and CH4 when only field emissions were taken into account in up-scaling the chamber measurements to landscape scale. This emphasizes the importance of stratifying the landscape into landscape elements. The conclusion is that small-scale chamber measurements can be used to estimate fluxes of CO2 and CH4 at landscape scale if the fluxes are up-scaled by landscape element.
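The record's daytime CO2 respiration is modelled from night-time EC fluxes with an Arrhenius-type temperature response. A minimal sketch of that idea, fitting the activation energy by linearizing ln R against 1/T (all parameter values and the noise model below are invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
R_GAS = 8.314                        # gas constant, J mol^-1 K^-1
A_true, Ea_true = 5.0e8, 5.0e4       # hypothetical pre-factor and activation energy
T = rng.uniform(278.0, 298.0, 200)   # synthetic night-time temperatures, K
resp = A_true * np.exp(-Ea_true / (R_GAS * T)) * rng.lognormal(0.0, 0.1, 200)

# The Arrhenius model linearizes as ln R = ln A - (Ea/R) * (1/T), so an
# ordinary least-squares line through (1/T, ln R) recovers the activation
# energy, which can then be used to predict the unmeasured daytime fluxes.
slope, intercept = np.polyfit(1.0 / T, np.log(resp), 1)
Ea_hat = -slope * R_GAS
```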

  5. Distance covariance for stochastic processes

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady

    2017-01-01

    The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
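The empirical distance covariance the record refers to has a compact sample formula based on double-centred pairwise-distance matrices. A minimal sketch for scalar samples; the quadratic dependence below is just a convenient test case, not data from the paper:

```python
import numpy as np

def dcov(x, y):
    """Empirical distance covariance of two equal-length scalar samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # double-centre: subtract row and column means, add back the grand mean
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return float(np.sqrt(max((A * B).mean(), 0.0)))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
dep = dcov(x, x**2)                   # dependent on x, yet uncorrelated with it
ind = dcov(x, rng.normal(size=500))   # independent of x
```

Here `dep` comes out well above `ind`, which is why the statistic can detect nonlinear dependence that ordinary covariance misses.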

  6. A study of stratified gas-liquid pipe flow

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, George W.

    2005-07-01

    This work includes both theoretical modelling and experimental observations which are relevant to the design of gas condensate transport lines. Multicomponent hydrocarbon gas mixtures are transported in pipes over long distances and at various inclinations. Under certain circumstances, the heavier hydrocarbon components and/or water vapour condense to form one or more liquid phases. Near the desired capacity, the liquid condensate and water are efficiently transported in the form of a stratified flow with a droplet field. During operating conditions, however, the flow rate may be reduced, allowing liquid accumulation, which can create serious operational problems due to large amounts of excess liquid being expelled into the receiving facilities during production ramp-up, or even in steady production in severe cases. In particular, liquid tends to accumulate in upward-inclined sections due to insufficient drag on the liquid from the gas. To optimize the transport of gas condensates, pipe diameters should be carefully chosen to account for varying flow rates and pressure levels, which are determined through knowledge of the multiphase flow present. It is desirable to have a reliable numerical simulation tool to predict liquid accumulation for various flow rates, pipe diameters and pressure levels, which is not presently accounted for by industrial flow codes. A critical feature of such a simulation code would be the ability to predict the transition from small liquid accumulation at high flow rates to large liquid accumulation at low flow rates. A semi-intermittent flow regime of roll waves alternating with a partly backward-flowing liquid film has been observed experimentally to occur for a range of gas flow rates. Most of the liquid is transported in the roll waves. The roll wave regime is not well understood and requires fundamental modelling and experimental research. The lack of reliable models for this regime leads to inaccurate prediction of the onset of

  7. Covariance Manipulation for Conjunction Assessment

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
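The 'PcMax' idea in the record, i.e. scaling the combined covariance until the probability of collision peaks, can be illustrated with a toy two-dimensional conjunction-plane calculation. The miss vector, covariance, and hard-body radius below are invented numbers, and the simple grid integration stands in for the operational Pc algorithms:

```python
import numpy as np

def pc_2d(miss, cov, hbr, n=201):
    """Probability of collision: integrate the bivariate Gaussian of the
    relative position (mean = miss vector, covariance = cov) over the
    hard-body disc of radius hbr centred at the origin."""
    xs = np.linspace(-hbr, hbr, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= hbr**2
    P = np.linalg.inv(cov)
    dx, dy = X - miss[0], Y - miss[1]
    dens = np.exp(-0.5 * (P[0, 0] * dx**2 + 2 * P[0, 1] * dx * dy + P[1, 1] * dy**2))
    dens /= 2 * np.pi * np.sqrt(np.linalg.det(cov))
    cell = (xs[1] - xs[0]) ** 2
    return float((dens * inside).sum() * cell)

miss = np.array([800.0, 200.0])                     # conjunction-plane miss, m
cov = np.array([[250.0**2, 0.0], [0.0, 120.0**2]])  # combined covariance, m^2
hbr = 20.0                                          # hard-body radius, m

scales = np.logspace(-1, 2, 60)                     # covariance scale factors
pcs = [pc_2d(miss, k * cov, hbr) for k in scales]
pc_max = max(pcs)                                   # the 'PcMax' of the record
```

Scanning the scale factor shows why PcMax exists: an overly small covariance places almost no density on the disc, an overly large one dilutes it, and some intermediate scaling maximizes the computed risk.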

  8. Direct contact condensation induced transition from stratified to slug flow

    International Nuclear Information System (INIS)

    Strubelj, Luka; Ezsoel, Gyoergy; Tiselj, Iztok

    2010-01-01

    Selected condensation-induced water hammer experiments performed on the PMK-2 device were numerically modelled with the three-dimensional two-fluid models of the computer codes NEPTUNE_CFD and CFX. The experimental setup consists of a horizontal pipe filled with hot steam that is slowly flooded with cold water. In most of the experimental cases, the slow flooding of the pipe was abruptly interrupted by strong slugging and water hammer, while in the selected experimental runs analysed in the present work, performed at higher initial pressures and temperatures, the transition from stratified to slug flow was not accompanied by a water hammer pressure peak. That makes these cases more suitable tests for the evaluation of various condensation models in horizontally stratified flows and puts them within the range of the available CFD (Computational Fluid Dynamics) codes. The key models for successful simulation appear to be the condensation model of the hot vapour on the cold liquid and the interfacial momentum transfer model. Surface-renewal types of condensation correlations, developed for condensation in stratified flows, were used in the simulations and were applied also in the regions of slug flow. The 'large interface' model for inter-phase momentum transfer was compared to the bubble drag model. The CFD simulations quantitatively captured the main phenomena of the experiments, while the stochastic nature of the particular condensation-induced water hammer experiments did not allow detailed prediction of the time and position of slug formation in the pipe. We have clearly shown that even the selected experiments without water hammer present a tough test for the applied CFD codes, while modelling of the water hammer pressure peaks in two-phase flow, a strongly compressible flow phenomenon, is beyond the capability of the current CFD codes.

  9. Large Eddy Simulation of stratified flows over structures

    Directory of Open Access Journals (Sweden)

    Brechler J.

    2013-04-01

    Full Text Available We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation-zone length and the length of the lee waves with the experiments of Hunt and Snyder [3] and the numerical computations of Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences were present.
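The height of the dividing streamline mentioned above is classically estimated from Sheppard's energy argument, used in the Hunt and Snyder experiments: fluid below z = h(1 - Fr) lacks the kinetic energy to surmount a hill of height h. A minimal sketch (the sample numbers are arbitrary, not from the paper):

```python
def dividing_streamline_height(h, U, N):
    """Sheppard-type estimate: in a flow of speed U with buoyancy frequency N,
    fluid below h*(1 - Fr), with Fr = U/(N*h), cannot rise over a hill of
    height h and must flow around it."""
    Fr = U / (N * h)                # Froude number of the approach flow
    return h * max(0.0, 1.0 - Fr)   # the whole flow passes over when Fr >= 1

# strong stratification: Fr = 0.2, so the lowest 80 m go around the hill
h_s = dividing_streamline_height(h=100.0, U=1.0, N=0.05)
```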

  10. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum, but also produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions. However, cost and utility parameters have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim, and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether the utility parameters were consistent and methodologically sound, and whether the costs were compatible and consistent in the actual parameters used for screening, diagnostic work-up and treatment. Of the 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look at the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  11. Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-06-01

    Recently, the number of regression models has dramatically increased in several academic fields. However, within the context of meta-analysis, synthesis methods for such models have not been developed in a commensurate trend. One of the difficulties hindering the development is the disparity in sets of covariates among literature models. If the sets of covariates differ across models, the interpretation of coefficients will differ, thereby making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often have problems because the covariance matrix of the coefficients (i.e. the within-study correlations) or individual patient data are not necessarily available. This study therefore proposes a method to synthesize linear regression models under different covariate sets by using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which is required for the calculation of the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.
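The generalized least squares pooling at the heart of the record can be sketched as follows, without the paper's bias-correction terms; the two "studies" and their coefficient covariances are fabricated for illustration:

```python
import numpy as np

def gls_pool(betas, covs):
    """Pool per-study coefficient vectors by generalized least squares:
    weight each study by the inverse of its within-study covariance."""
    W = [np.linalg.inv(C) for C in covs]
    S = np.linalg.inv(sum(W))      # covariance of the pooled estimate
    b = S @ sum(w @ np.asarray(v, float) for w, v in zip(W, betas))
    return b, S

# two hypothetical studies reporting the same two regression coefficients
b1, C1 = [1.2, 0.5], np.array([[0.04, 0.01], [0.01, 0.09]])
b2, C2 = [0.8, 0.7], np.array([[0.02, 0.00], [0.00, 0.16]])
beta, cov = gls_pool([b1, b2], [C1, C2])
```

The pooled coefficients land between the study estimates and their variances shrink below either study's, which is the efficiency gain the multivariate approach offers over univariate pooling.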

  12. The covariant entropy bound in gravitational collapse

    International Nuclear Information System (INIS)

    Gao, Sijie; Lemos, Jose P. S.

    2004-01-01

    We study the covariant entropy bound in the context of gravitational collapse. First, we discuss critically the heuristic arguments advanced by Bousso. Then we solve the problem through an exact model: a Tolman-Bondi dust shell collapsing into a Schwarzschild black hole. After the collapse, a new black hole with a larger mass is formed. The horizon, L, of the old black hole then terminates at the singularity. We show that the entropy crossing L does not exceed a quarter of the area of the old horizon. Therefore, the covariant entropy bound is satisfied in this process. (author)

  13. An alternative covariance estimator to investigate genetic heterogeneity in populations.

    Science.gov (United States)

    Heslot, Nicolas; Jannink, Jean-Luc

    2015-11-26

    For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel, to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model and parameters are estimated by REML and, in extreme cases, it can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative

  14. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They may involve from hyperconcentrated floods carrying sediment causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Also, in stratified flows variable horizontal density is present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to observe that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of Energy balanced augmented solvers, in particular, the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

  15. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    Science.gov (United States)

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and
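The residual-covariation analysis described above amounts to correlating task scores after partialling out intelligence. A minimal sketch on simulated data, where a shared perceptual "p" factor is planted alongside a "g" (IQ) contribution; every number and variable below is invented for illustration:

```python
import numpy as np

def residual_corr(a, b, iq):
    """Correlate two task scores after regressing out an intelligence score."""
    X = np.column_stack([np.ones_like(iq), iq])
    ra = a - X @ np.linalg.lstsq(X, a, rcond=None)[0]
    rb = b - X @ np.linalg.lstsq(X, b, rcond=None)[0]
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(3)
iq = rng.normal(100.0, 15.0, 300)
p = rng.normal(size=300)                   # shared perceptual factor
task1 = 0.5 * iq / 15.0 + p + rng.normal(size=300)
task2 = 0.5 * iq / 15.0 + p + rng.normal(size=300)

r = residual_corr(task1, task2, iq)        # covariation left after removing "g"
```

A clearly positive `r` despite the IQ adjustment is the signature the study interprets as a "p" factor.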

  16. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    Directory of Open Access Journals (Sweden)

    Andrée-Anne S Meilleur

    Full Text Available Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in

  17. A global bioheat model with self-tuning optimal regulation of body temperature using Hebbian feedback covariance learning.

    Science.gov (United States)

    Ong, M L; Ng, E Y K

    2005-12-01

    In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of enormous complexity: many systems and sub-systems are involved in temperature regulation, and the physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties and delayed feedback loops, as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning, where the system is regulated continually to best suit its environment. This hypothesis is supported in part by postulations of the presence of adaptive optimization behavior in the biological systems of certain organisms that face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role for Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core-and-shell representation of the body, in which global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and of heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained using the conventional "set-point" control theory.
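A Hebbian feedback covariance rule of the kind the model postulates strengthens a connection when pre- and post-synaptic signals fluctuate together about their running means, and weakens it when they fluctuate oppositely. A minimal sketch of such a rule; the learning rate, signal model and time constants are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
eta, tau = 0.01, 0.1          # learning rate and running-mean rate (hypothetical)
w_pos = w_neg = 0.0
x_bar = y_bar = z_bar = 0.0

for _ in range(5000):
    x = rng.normal()              # "pre-synaptic" drive
    y = x + 0.5 * rng.normal()    # feedback that covaries positively with x
    z = -x + 0.5 * rng.normal()   # feedback that covaries negatively with x
    # exponentially weighted running means of each signal
    x_bar += tau * (x - x_bar)
    y_bar += tau * (y - y_bar)
    z_bar += tau * (z - z_bar)
    # covariance rule: each weight leaks toward the covariance of the
    # fluctuations of its pre/post pair about their running means
    w_pos += eta * ((x - x_bar) * (y - y_bar) - w_pos)
    w_neg += eta * ((x - x_bar) * (z - z_bar) - w_neg)
```

After many steps `w_pos` settles near the positive covariance and `w_neg` near the negative one, so a controller built this way can continually re-tune its gains to whatever feedback relationship the environment currently presents.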

  18. A review of recent developments on turbulent entrainment in stratified flows

    International Nuclear Information System (INIS)

    Cotel, Aline J

    2010-01-01

    Stratified interfaces are present in many geophysical flow situations, and transport across such an interface is an essential factor for correctly evaluating the physical processes taking place at many spatial and temporal scales in such flows. In order to accurately evaluate vertical and lateral transport occurring when a turbulent flow impinges on a stratified interface, the turbulent entrainment and vorticity generation mechanisms near the interface must be understood and quantified. Laboratory experiments were performed for three flow configurations: a vertical thermal, a sloping gravity current and a vertical turbulent jet with various tilt angles and precession speeds. All three flows impinged on an interface separating a two-layer stably stratified environment. The entrainment rate is quantified for each flow using laser-induced fluorescence and compared to predictions of Cotel and Breidenthal (1997 Appl. Sci. Res. 57 349-66). The possible applications of transport across stratified interfaces include the contribution of hydrothermal plumes to the global ocean energy budget, turbidity currents on the ocean floor, the design of lake de-stratification systems, modeling gas leaks from storage reservoirs, weather forecasting and global climate change.

  19. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: Application of a stratified model

    Science.gov (United States)

    Lee, Kang Il

    2012-08-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.
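The periodically stratified solid-fluid model applied in the record has a simple low-frequency limit for propagation normal to the layers: the effective plane-wave modulus is the harmonic mean of the layer moduli and the effective density is the arithmetic mean. A sketch with rough, hypothetical bone and fluid properties (not the paper's values):

```python
import numpy as np

# hypothetical material parameters: solid trabeculae and marrow-like fluid
rho_s, c_s = 1800.0, 3300.0    # kg/m^3, m/s (solid layers)
rho_f, c_f = 1000.0, 1480.0    # kg/m^3, m/s (fluid layers)
phi = 0.2                       # solid volume fraction (a BV/TV analogue)

M_s = rho_s * c_s**2            # plane-wave moduli of the layers
M_f = rho_f * c_f**2

# low-frequency effective medium for propagation normal to the layers:
# harmonic mean of the moduli, arithmetic mean of the densities
M_eff = 1.0 / (phi / M_s + (1 - phi) / M_f)
rho_eff = phi * rho_s + (1 - phi) * rho_f
c_eff = np.sqrt(M_eff / rho_eff)
```

With these inputs the effective speed falls between the fluid and solid speeds, close to the fluid value, which is the qualitative trend measured in trabecular bone at low solid volume fractions.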

  20. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study

    Directory of Open Access Journals (Sweden)

    Tania Dehesh

    2015-01-01

    Full Text Available Background. The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.

  1. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

    Science.gov (United States)

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggested the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.
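The ZC and CC approximations evaluated in this record differ only in the correlation assumed when reconstructing the within-study covariance of the coefficients from their reported standard errors. A minimal sketch (the standard errors and the common correlation value below are invented):

```python
import numpy as np

def coef_cov(se, rho):
    """Within-study covariance of coefficients rebuilt from standard errors
    plus an assumed common correlation rho: rho = 0 is the 'zero correlation'
    (ZC) approach, rho > 0 the 'common correlation' (CC) approach."""
    se = np.asarray(se, float)
    R = np.full((len(se), len(se)), rho)   # assumed correlation matrix
    np.fill_diagonal(R, 1.0)
    return np.outer(se, se) * R            # covariance = se_i * se_j * rho_ij

C_zc = coef_cov([0.2, 0.3], rho=0.0)       # diagonal: correlations ignored
C_cc = coef_cov([0.2, 0.3], rho=0.4)       # shared off-diagonal correlation
```

Either reconstructed matrix can then be fed to a GLS pooling step; the record's simulations compare how much such assumptions cost relative to estimating the correlations.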

  2. Covariant field equations, gauge fields and conservation laws from Yang-Mills matrix models

    International Nuclear Information System (INIS)

    Steinacker, Harold

    2009-01-01

The effective geometry and the gravitational coupling of nonabelian gauge and scalar fields on generic NC branes in Yang-Mills matrix models are determined. Covariant field equations are derived from the basic matrix equations of motion, known as the Yang-Mills algebra. Remarkably, the equations of motion for the Poisson structure and for the nonabelian gauge fields follow from a matrix Noether theorem, and are therefore protected from quantum corrections. This provides a transparent derivation and generalization of the effective action governing the SU(n) gauge fields obtained in [1], including the would-be topological term. In particular, the IKKT matrix model is capable of describing 4-dimensional NC space-times with a general effective metric. Metric deformations of flat Moyal-Weyl space are briefly discussed.

  3. Turbulent circulation above the surface heat source in stably stratified atmosphere

    Science.gov (United States)

    Kurbatskii, A. F.; Kurbatskaya, L. I.

    2016-10-01

The 3-level RANS approach for simulating a turbulent circulation over a heat island in a stably stratified environment under nearly calm conditions is formulated. The turbulent kinetic energy, its spectral consumption (dissipation), and the variance of turbulent temperature fluctuations are found from differential equations, so that correct modeling of the transport processes in the interface layer with a counter-gradient heat flux is assured. The three-parameter turbulence RANS approach minimizes the difficulties of simulating turbulent transport in a stably stratified environment and reduces the effort needed for the numerical implementation of the 3-level RANS approach. Numerical simulation of the turbulent structure of the penetrative convection over the heat island under conditions of a stably stratified atmosphere demonstrates that the three-equation model is able to predict the thermal circulation induced by the heat island. The temperature distribution, the root-mean-square fluctuations of the turbulent velocity and temperature fields, and the spectral turbulent kinetic energy flux are in good agreement with the experimental data. The model also captures subtle physical effects, such as the crossing of vertical temperature profiles of a thermal plume with the formation of a negative-buoyancy region, testifying to the development of a dome-shaped "hat" at the top of the plume.

  4. Cross-covariance based global dynamic sensitivity analysis

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng

    2018-02-01

To identify the sources of the cross-covariance of a dynamic output at each time instant for structural systems involving both random input variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of an input to the cross-covariance of the output. The GDS main-effect indices and the GDS total-effect indices can then be easily defined after the integration; they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA decomposition when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effects of the inputs on the output variance, and the second-order partial effect reflects their interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. A Monte Carlo simulation (MCS) procedure and a Kriging surrogate method are developed to compute the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
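The main-effect contribution of one input to the output cross-covariance can be illustrated with a pick-freeze-style Monte Carlo estimate on a toy dynamic model; this is a schematic stand-in for the idea, not the paper's estimator, and the model and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, t):
    # toy dynamic output: two random inputs, one time instant t
    return x1 * np.sin(t) + x2 * t

def main_effect_cov(n, t1, t2):
    """Pick-freeze estimate of input X1's main-effect contribution to
    Cov(Y(t1), Y(t2)): regenerate X2 while freezing X1 across the pair."""
    x1 = rng.normal(0.0, 1.0, n)
    x2a = rng.normal(0.0, 0.5, n)
    x2b = rng.normal(0.0, 0.5, n)       # fresh copy of X2, same X1
    ya = model(x1, x2a, t1)
    yb = model(x1, x2b, t2)
    return np.cov(ya, yb)[0, 1]         # for this toy model: Var(X1)*sin(t1)*sin(t2)
```

For this additive toy model the estimate can be checked against the closed form Var(X1)·sin(t1)·sin(t2).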

  5. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

In this paper, we propose a new approach to compute the compressible multifluid equations. First, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Second, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Third, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture fine details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and an underwater explosion.

  6. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  7. Econometric analysis of realised covariation: high frequency covariance, regression and correlation in financial economics

    OpenAIRE

    Ole E. Barndorff-Nielsen; Neil Shephard

    2002-01-01

    This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
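The realised covariation studied above can be illustrated with a minimal NumPy sketch: over a fixed interval, the realised covariance is the sum of outer products of high-frequency log-returns, from which realised correlations follow. Function names and the simulated data are illustrative, not from the paper:

```python
import numpy as np

def realised_covariation(prices):
    """Realised covariance over a fixed interval: sum of outer products of
    high-frequency log-returns. `prices` is (m+1, d): m returns, d assets."""
    r = np.diff(np.log(prices), axis=0)   # m x d log-returns
    return r.T @ r                        # d x d realised covariance

def realised_correlation(prices):
    rc = realised_covariation(prices)
    s = np.sqrt(np.diag(rc))
    return rc / np.outer(s, s)
```

As the number of intraday returns m grows, the realised covariance of a simulated diffusion converges to the integrated covariance, which is the asymptotic regime the paper analyses.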

  8. Natural Covariant Planck Scale Cutoffs and the Cosmic Microwave Background Spectrum.

    Science.gov (United States)

    Chatwin-Davies, Aidan; Kempf, Achim; Martin, Robert T W

    2017-07-21

    We calculate the impact of quantum gravity-motivated ultraviolet cutoffs on inflationary predictions for the cosmic microwave background spectrum. We model the ultraviolet cutoffs fully covariantly to avoid possible artifacts of covariance breaking. Imposing these covariant cutoffs results in the production of small, characteristically k-dependent oscillations in the spectrum. The size of the effect scales linearly with the ratio of the Planck to Hubble lengths during inflation. Consequently, the relative size of the effect could be as large as one part in 10^{5}; i.e., eventual observability may not be ruled out.

  9. SIMULATIONS OF WIDE-FIELD WEAK-LENSING SURVEYS. II. COVARIANCE MATRIX OF REAL-SPACE CORRELATION FUNCTIONS

    International Nuclear Information System (INIS)

    Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi

    2011-01-01

Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, which are the standard statistic used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian contribution to the covariance matrix measured from the original simulations. We find that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to an effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angles and source redshifts. For upcoming surveys with typical source redshifts of z_s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model reproduce the simulation results qualitatively well; however, they show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix for a survey with arbitrary area coverage, taking into account the effects of the finite survey area on the Gaussian covariance.
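The basic ingredient here, estimating the covariance matrix of a binned statistic from an ensemble of realizations, can be sketched generically (this is the standard sample-covariance estimator, not the authors' ray-tracing pipeline):

```python
import numpy as np

def ensemble_covariance(stats):
    """Covariance matrix of a binned statistic (e.g. a correlation function
    in several angular bins) estimated from N independent realisations.
    `stats` has shape (N, nbins)."""
    mean = stats.mean(axis=0)
    d = stats - mean
    return d.T @ d / (stats.shape[0] - 1)   # unbiased sample covariance
```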

  10. ISSUES IN NEUTRON CROSS SECTION COVARIANCES

    Energy Technology Data Exchange (ETDEWEB)

    Mattoon, C.M.; Oblozinsky,P.

    2010-04-30

We review neutron cross section covariances in both the resonance and fast neutron regions with the goal of identifying existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, the evaluation methodologies are not fully established, and covariances produced by different approaches have an unacceptable spread. The main controversy lies between the very low uncertainties generated by rigorous evaluation methods and the much larger uncertainties based on simple estimates from experimental data. Since the evaluators tend to trust the former, while the users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally both by evaluators and users.

  11. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
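The sparsification-by-filtering step at the heart of CASE can be illustrated on a toy Gram matrix: for an AR(1)-type Toeplitz matrix with slowly decaying off-diagonals, a first-order difference filter renders it (in this special case exactly) diagonal. This is a schematic NumPy sketch of the phenomenon, not the paper's procedure:

```python
import numpy as np

def ar1_gram(n, rho):
    # dense, non-sparse Gram matrix with slowly decaying off-diagonals
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def difference_filter(n, rho):
    # first-order linear filter F: (Fx)_t = x_t - rho * x_{t-1}
    F = np.eye(n)
    F[np.arange(1, n), np.arange(n - 1)] = -rho
    return F

n, rho = 50, 0.8
G = ar1_gram(n, rho)
F = difference_filter(n, rho)
G_filtered = F @ G @ F.T   # sparse (here: exactly diagonal) after filtering
```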

  12. Clustered multistate models with observation level random effects, mover-stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis.

    Science.gov (United States)

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2018-02-01

    In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.

  13. Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures

    Directory of Open Access Journals (Sweden)

    Nishchal K. Verma

    2012-01-01

This paper presents a novel computational approach for estimating fuzzy measures directly from a Gaussian mixture model (GMM). The mixture components of the GMM provide the membership functions for the input-output fuzzy sets. By treating the consequent part as a function of fuzzy measures, we derived its coefficients from the covariance matrices found directly from the GMM, and the defuzzified output is constructed from both the premise and consequent parts of the nonadditive fuzzy rules, taking the form of a Choquet integral. The computational burden involved with the solution of the λ-measure is minimized using the Q-measure. The fuzzy model whose fuzzy measures were computed using the covariance matrices found in the GMM has been successfully applied to two benchmark problems and one real-time electric load dataset of an Indian utility. The performance of the resulting model in many experimental studies, including the above-mentioned application, is found to be better than or comparable to recently available fuzzy models. The main contribution of this paper is the efficient estimation of fuzzy measures directly from the covariance matrices found in the GMM, largely avoiding the computational burden of learning them iteratively and of solving polynomial equations whose order equals the number of input-output variables.

  14. Monte Carlo stratified source-sampling

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    1997-01-01

In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic eigenvalue-of-the-world configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.
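Stratified source-sampling is, at its core, stratified Monte Carlo sampling. A generic one-dimensional sketch (illustrative only, unrelated to the VIM code) shows the variance reduction over plain sampling when the sample is spread evenly across equal-width strata:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)              # stand-in for a tally over source sites

def plain_mc(n):
    # ordinary Monte Carlo estimate of the integral of f over [0, 1]
    return f(rng.random(n)).mean()

def stratified_mc(n, k=10):
    # draw n/k points uniformly inside each of k equal strata of [0, 1]
    m = n // k
    lows = np.repeat(np.arange(k) / k, m)
    x = lows + rng.random(k * m) / k
    return f(x).mean()
```

Both estimators target the same integral (here e - 1), but the stratified one eliminates the between-strata component of the variance.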

  15. Covariant holography of a tachyonic accelerating universe

    Energy Technology Data Exchange (ETDEWEB)

    Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)

    2014-08-15

    We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)

  16. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    Science.gov (United States)

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

Clinical studies include occurrences of rare variables, such as genotypes, whose effects are difficult to estimate from a dataset because of their frequency and strength. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, hence being ignorable, or conversely they may be influential and therefore non-ignorable. In the case that these covariate effects cannot be estimated due to limited power and are non-ignorable, they are considered nuisance effects, in that they have to be accounted for but, due to type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable, and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed-effects model setting. Type I error was determined for the Wald test. The methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation, and inclusion of a specific fixed-effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through the addition of a fixed-effect parameter.
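Calibrating the type I error of the Wald test can be sketched by simulation under the null. The toy below uses a simple linear model fitted by ordinary least squares as a stand-in for the paper's nonlinear mixed-effects setting; the sample size, covariate frequency, and critical value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_type1(n=100, freq=0.05, nsim=2000, crit=3.841):
    """Empirical type I error of the Wald test for a rare binary covariate
    in a simple linear model simulated under the null (true effect = 0).
    crit = 3.841 is the 5% chi-square(1) critical value."""
    rejections = 0
    for _ in range(nsim):
        x = (rng.random(n) < freq).astype(float)
        if x.sum() in (0, n):              # covariate constant: test undefined
            continue
        y = rng.standard_normal(n)         # null model: no covariate effect
        X = np.column_stack([np.ones(n), x])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        wald = beta[1] ** 2 / cov[1, 1]
        rejections += wald > crit
    return rejections / nsim
```

In this Gaussian linear setting the empirical rate stays near the nominal 5%; the inflation the paper documents arises in the nonlinear mixed-effects setting.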

  17. Evaluation of covariance data for chromium, iron and nickel contained in JENDL-3.2

    International Nuclear Information System (INIS)

    Oh, Soo-Youl; Shibata, Keiichi.

    1998-01-01

An evaluation has been made of the covariances of neutron cross sections of 52Cr, 56Fe, 58Ni and 60Ni contained in JENDL-3.2. The reactions considered were the threshold reactions (n, 2n), (n, nα), (n, np), (n, p), (n, d), (n, t) and (n, α), the radiative capture reaction above the resonance region, and the inelastic scattering to discrete and continuum levels. Evaluation guidelines and procedures were established during the work. A generalized least-squares fitting code, GMA, was used in estimating covariances for reactions whose JENDL-3.2 cross sections had been evaluated by taking account of many measured data. For cross sections that had been evaluated by nuclear reaction model calculations, the KALMAN code, which yields covariances of cross sections and of associated model parameters on the basis of Bayesian statistics, was used in conjunction with the reaction model codes EGNASH and CASTHY. The evaluated uncertainties of a few percent to 30% in the cross sections look reasonable, and the correlation matrices show understandable trends. Even though there is no strict way to confirm the validity of the evaluated covariances, the tools and procedures adopted in the present work are appropriate for producing covariance files based on JENDL-3.2. The covariances obtained will be compiled into JENDL in the near future. Meanwhile, new sets of optical model and level density parameters were proposed as a byproduct of the KALMAN calculations. (author)
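The KALMAN-type step, a Bayesian least-squares update that returns parameter values together with a full covariance matrix, can be sketched generically. This is the textbook Kalman/GLS update under an assumed linearized sensitivity matrix G, not the KALMAN code itself:

```python
import numpy as np

def kalman_update(prior_mean, prior_cov, G, y, V):
    """One Bayesian least-squares (Kalman-style) update of model parameters
    and their covariance from measurements y with covariance V, where the
    measurements respond linearly to the parameters: y ≈ G @ parameters."""
    S = G @ prior_cov @ G.T + V                 # innovation covariance
    K = prior_cov @ G.T @ np.linalg.inv(S)      # gain
    post_mean = prior_mean + K @ (y - G @ prior_mean)
    post_cov = prior_cov - K @ G @ prior_cov    # shrunk posterior covariance
    return post_mean, post_cov
```

The posterior covariance is what gets written out as the evaluated covariance file; by construction it is never larger than the prior covariance.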

  18. Fast Computing for Distance Covariance

    OpenAIRE

    Huo, Xiaoming; Szekely, Gabor J.

    2014-01-01

Distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other, faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
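The direct O(n²) computation referred to above can be sketched with double-centred distance matrices. This is illustrative code for real-valued samples following the definition, not the paper's fast algorithm:

```python
import numpy as np

def distance_covariance(x, y):
    """Direct O(n^2) sample distance covariance of two real-valued samples,
    via double-centred pairwise distance matrices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])     # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])     # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return np.sqrt(np.maximum((A * B).mean(), 0.0))
```

Normalising by the two distance variances gives the distance correlation, which equals 1 for an exact linear relationship and is near 0 for independent samples.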

  19. The effect of surfactant on stratified and stratifying gas-liquid flows

    Science.gov (United States)

    Heiles, Baptiste; Zadrazil, Ivan; Matar, Omar

    2013-11-01

    We consider the dynamics of a stratified/stratifying gas-liquid flow in horizontal tubes. This flow regime is characterised by the thin liquid films that drain under gravity along the pipe interior, forming a pool at the bottom of the tube, and the formation of large-amplitude waves at the gas-liquid interface. This regime is also accompanied by the detachment of droplets from the interface and their entrainment into the gas phase. We carry out an experimental study involving axial- and radial-view photography of the flow, in the presence and absence of surfactant. We show that the effect of surfactant is to reduce significantly the average diameter of the entrained droplets, through a tip-streaming mechanism. We also highlight the influence of surfactant on the characteristics of the interfacial waves, and the pressure gradient that drives the flow. EPSRC Programme Grant EP/K003976/1.

  20. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice manufacturers may have lots of failure data of similar products using the same technology basis under different operating conditions. Thus, one can try to derive predictions for the distribution of the lifetime of newly developed components or new application environments through the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, where its parameters are modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well-known in Survival Analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular the last case is new in the context of reliability analysis. We propose a goodness of fit measure (GoF), which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data of motor experiments. Further the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed
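The parametric approach described above, Weibull lifetimes whose parameters depend on a covariate, can be sketched as follows. Here the scale is taken to be log-linear in the covariate (one of several possible parameterizations), and all coefficient values are invented for illustration:

```python
import numpy as np

def weibull_survival(t, x, k=1.5, lam0=6.0, b=-0.4):
    """Survival function S(t | x) of a Weibull lifetime whose scale is
    log-linear in a scalar covariate x (e.g. an operating condition):
    lam(x) = lam0 * exp(b * x). Coefficient values are illustrative only."""
    lam = lam0 * np.exp(b * x)
    return np.exp(-(np.asarray(t, float) / lam) ** k)
```

With b < 0 a larger covariate value (a harsher operating condition, say) shifts the lifetime distribution towards earlier failures.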

  1. ERRORJ. Covariance processing code. Version 2.2

    International Nuclear Information System (INIS)

    Chiba, Go

    2004-07-01

ERRORJ is a covariance processing code that can produce covariance data of multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. The ERRORJ code can process the covariance data of cross sections, including resonance parameters and the angular and energy distributions of secondary neutrons; such covariance data cannot be processed by other covariance processing codes. ERRORJ has been modified, and version 2.2 has been developed. This document describes the modifications and how to use them. The main modifications are as follows. Non-diagonal elements of covariance matrices are calculated in the resonance energy region. An option for high-speed calculation is implemented. The perturbation amount is optimized in sensitivity calculations. The effect of resonance self-shielding on the covariance of multi-group cross sections can be considered. It is possible to read a compact covariance format proposed by N.M. Larson. (author)

  2. Electromagnetic waves in stratified media

    CERN Document Server

    Wait, James R; Fock, V A; Wait, J R

    2013-01-01

    International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to the electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising of any number of parallel homogeneous layers. This text then explains the reflection of electromagne

  3. Covariance matrix estimation for stationary time series

    OpenAIRE

    Xiao, Han; Wu, Wei Biao

    2011-01-01

We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
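The banded and thresholded estimators discussed above can be sketched in a few lines (generic implementations of the two ideas, not the authors' code):

```python
import numpy as np

def banded_cov(X, band):
    """Banded covariance estimate for a stationary series: keep only entries
    of the sample covariance within `band` of the diagonal."""
    S = np.cov(X, rowvar=False)
    idx = np.arange(S.shape[0])
    mask = np.abs(idx[:, None] - idx[None, :]) <= band
    return S * mask

def thresholded_cov(X, thresh):
    """Thresholded estimate: zero out small entries, keeping the diagonal."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= thresh, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```

When the true covariance really is banded, banding can only remove noise in entries whose true value is zero, so its Frobenius error never exceeds that of the raw sample covariance.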

  4. Relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro: application of a stratified model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Il [Kangwon National University, Chuncheon (Korea, Republic of)

    2012-08-15

    The present study aims to provide insight into the relationships of the phase velocity with the micro architectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21 - 0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.

  6. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  7. White dwarf stars with chemically stratified atmospheres

    Science.gov (United States)

    Muchmore, D.

    1982-01-01

Recent observations and theory suggest that some white dwarfs may have chemically stratified atmospheres - thin layers of hydrogen lying above helium-rich envelopes. Models of such atmospheres show that a discontinuous temperature inversion can occur at the boundary between the layers. Model spectra for layered atmospheres at 30,000 K and 50,000 K tend to have smaller decrements at 912 Å, 504 Å, and 228 Å than uniform atmospheres would have. On the basis of their continuous extreme-ultraviolet spectra, it is possible to distinguish observationally between uniform and layered atmospheres for hot white dwarfs.

  8. Estimation of covariances of Cr and Ni neutron nuclear data in JENDL-3.2

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Oh, Soo Youl [Korea Atomic Energy Research Institute, Taejon (Korea)

    2000-02-01

    Covariances of nuclear data have been estimated for 2 nuclides contained in JENDL-3.2. The nuclides considered are Cr and Ni, which are regarded as important for the nuclear design study of fast reactors. The physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated using the same methodology that had been used in the JENDL-3.2 evaluation, in order to keep consistency between mean values and their covariances. The least-squares fitting code GMA was used to estimate covariances for reactions whose JENDL-3.2 cross sections had been evaluated taking measurements into account. Covariances of nuclear model calculations were deduced using the KALMAN system. The covariance data obtained were compiled in the ENDF-6 format, and will be put into the JENDL-3.2 Covariance File, which is one of the JENDL special purpose files. (author)

  9. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    Science.gov (United States)

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with an "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis with the UN covariance. However, the MMRM analysis with the UN covariance can lead to convergence problems in the numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in small-sample settings. We investigated the performance of the sandwich covariance estimator and of covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134), fitting simpler covariance structures, through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications
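    To make the estimators being compared concrete, here is a minimal sketch of the cluster-robust sandwich covariance for ordinary least squares together with the Mancl and DeRouen small-sample adjustment, which inflates each subject's residuals by (I - H_ii)^-1. This is an illustration in a simpler setting than the paper's MMRM analyses, and the function name is ours:

```python
import numpy as np

def sandwich_cov(X_blocks, y_blocks, correction=None):
    """Cluster-robust (sandwich) covariance of OLS coefficients.

    correction=None  -> original Liang-Zeger sandwich estimator
    correction='MD'  -> Mancl & DeRouen small-sample adjustment, which
                        inflates each cluster's residuals by (I - H_ii)^-1
    """
    X = np.vstack(X_blocks)
    y = np.concatenate(y_blocks)
    bread = np.linalg.inv(X.T @ X)       # the "bread" of the sandwich
    beta = bread @ X.T @ y               # OLS point estimate
    meat = np.zeros_like(bread)
    for Xi, yi in zip(X_blocks, y_blocks):
        ei = yi - Xi @ beta              # within-cluster residuals
        if correction == 'MD':
            Hii = Xi @ bread @ Xi.T      # cluster block of the hat matrix
            ei = np.linalg.solve(np.eye(len(ei)) - Hii, ei)
        g = Xi.T @ ei
        meat += np.outer(g, g)
    return beta, bread @ meat @ bread
```

    The Kauermann-Carroll adjustment is analogous but uses (I - H_ii)^-1/2 in place of (I - H_ii)^-1.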

  10. A stratified percolation model for saturated and unsaturated flow through natural fractures

    International Nuclear Information System (INIS)

    Pyrak-Nolte, L.J.

    1990-01-01

    The geometry of the asperities of contact between the two surfaces of a fracture and of the adjacent void spaces determines fluid flow through a fracture and the mechanical deformation across a fracture. Heuristically we have developed a stratified continuum percolation model to describe this geometry based on a fractal construction that includes scale invariance and correlation of void apertures. Deformation under stress is analyzed using conservation of rock volume to correct for asperity interpenetration. Single phase flow is analyzed using a critical path along which the principal resistance is a result of laminar flow across the critical neck in this path. Results show that flow decreases with apparent aperture raised to a variable power greater than cubic, as is observed in flow experiments on natural fractures. For two phases, flow of the non-wetting phase is likewise governed by the critical neck along the critical path of largest aperture but flow of the wetting phase is governed by tortuosity. 17 refs., 10 figs

  11. Activities of covariance utilization working group

    International Nuclear Information System (INIS)

    Tsujimoto, Kazufumi

    2013-01-01

    During the past decade, there has been an interest in the calculational uncertainties induced by nuclear data uncertainties in the neutronics design of advanced nuclear systems. Covariance nuclear data are absolutely essential for uncertainty analysis. In the latest version of JENDL, JENDL-4.0, the covariance data for many nuclides, especially actinide nuclides, were substantially enhanced. The growing interest in uncertainty analysis and covariance data has led to the organisation of the working group for covariance utilization under the JENDL committee. (author)

  12. Do current cosmological observations rule out all covariant Galileons?

    Science.gov (United States)

    Peirone, Simone; Frusciante, Noemi; Hu, Bin; Raveri, Marco; Silvestri, Alessandra

    2018-03-01

    We revisit the cosmology of covariant Galileon gravity in view of the most recent cosmological data sets, including weak lensing. As a higher derivative theory, covariant Galileon models do not have a ΛCDM limit and predict a very different structure formation pattern compared with the standard ΛCDM scenario. Previous cosmological analyses suggest that this model is marginally disfavored, yet cannot be completely ruled out. In this work we use a more recent and extended combination of data, and we allow for more freedom in the cosmology, by including a massive neutrino sector with three different mass hierarchies. We use the Planck measurements of cosmic microwave background temperature and polarization; baryonic acoustic oscillation measurements by BOSS DR12; local measurements of H0; the joint light-curve analysis supernovae sample; and, for the first time, weak gravitational lensing from the KiDS Collaboration. We find that, in order to provide a reasonable fit, a nonzero neutrino mass is indeed necessary, but we do not report any sizable difference among the three neutrino hierarchies. Finally, the comparison of the Bayesian evidence to that of ΛCDM shows that in all the cases considered, covariant Galileon models are statistically ruled out by cosmological data.

  13. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data these algorithms outputted many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  14. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  15. Spatial prediction of Soil Organic Carbon contents in croplands, grasslands and forests using environmental covariates and Generalized Additive Models (Southern Belgium)

    Science.gov (United States)

    Chartin, Caroline; Stevens, Antoine; van Wesemael, Bas

    2015-04-01

    Providing spatially continuous Soil Organic Carbon (SOC) data is needed to support decisions regarding soil management, and to inform the political debate with quantified estimates of the status and change of the soil resource. Digital Soil Mapping (DSM) techniques are based on relations existing between a soil parameter (measured at different locations in space at a defined period) and relevant covariates (spatially continuous data) that are factors controlling soil formation and explaining the spatial variability of the target variable. This study aimed at applying DSM techniques to recent SOC content measurements (2005-2013) in three different land uses, i.e. cropland, grassland, and forest, in the Walloon region (Southern Belgium). For this purpose, the SOC databases of two regional Soil Monitoring Networks (CARBOSOL for croplands and grasslands, and IPRFW for forests) were first harmonized, totalising about 1,220 observations. Median values of SOC content for croplands, grasslands, and forests are, respectively, 12.8, 29.0, and 43.1 g C kg-1. Then, a set of spatial layers were prepared with a resolution of 40 meters and the same grid topology, containing environmental covariates such as land use, a Digital Elevation Model and its derivatives, soil texture, C factor, carbon inputs by manure, and climate. Here, in addition to the three classical texture classes (clay, silt, and sand), we tested the use of clay + fine silt content (particles < 20 µm, related to the stable carbon fraction) as a soil covariate explaining SOC variations. For each of the three land uses (cropland, grassland and forest), a Generalized Additive Model (GAM) was calibrated on two thirds of the respective dataset. The remaining samples were assigned to a test set to assess model performance. A backward stepwise procedure was followed to select the relevant environmental covariates using their approximate p-values (the level of significance was set at p < 0.05). Standard errors were estimated for each of

  16. The covariant-evolution-operator method in bound-state QED

    International Nuclear Information System (INIS)

    Lindgren, Ingvar; Salomonson, Sten; Aasen, Bjoern

    2004-01-01

    The methods of quantum-electrodynamical (QED) calculations on bound atomic systems are reviewed with emphasis on the newly developed covariant-evolution-operator method. The aim is to compare that method with other available methods and also to point out possibilities to combine it with standard many-body perturbation theory (MBPT) in order to perform accurate numerical QED calculations, including quasi-degeneracy, also for light elements, where the electron correlation is relatively strong. As a background, time-independent MBPT is briefly reviewed, particularly the method with extended model space. Time-dependent perturbation theory is discussed in some detail, introducing the time-evolution operator and the Gell-Mann-Low relation, generalized to an arbitrary model space. Three methods of treating the bound-state QED problem are discussed. The standard S-matrix formulation, which is restricted to a degenerate model space, is discussed only briefly. Two methods applicable also to the quasi-degenerate problem are treated in more detail: the two-times Green's-function and the covariant-evolution-operator techniques. The treatment concentrates on the latter technique, which has been developed more recently and has not been discussed in detail before. A comparison of the two-times Green's-function and the covariant-evolution-operator techniques, which have great similarities, is performed. In the appendix a simple procedure is derived for expressing the evolution-operator diagrams of arbitrary order. The possibility of merging QED in the covariant-evolution-operator formulation with MBPT in a systematic way is indicated. With such a technique it might be feasible to perform accurate QED calculations also on light elements, which is presently not possible with the techniques available.

  17. Quality Quantification of Evaluated Cross Section Covariances

    International Nuclear Information System (INIS)

    Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.

    2015-01-01

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the 85 Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations
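    The Kullback-Leibler criterion described above has a closed form for zero-mean Gaussians; a minimal sketch (the function name is ours, and the paper additionally estimates the criterion by bootstrap when the true covariance matrix is unknown):

```python
import numpy as np

def gaussian_kl(sigma_est, sigma_ref):
    """Kullback-Leibler distance KL(N(0, sigma_est) || N(0, sigma_ref)),
    used here as a quality score for an estimated covariance matrix
    relative to a reference ("true") one.  Zero means a perfect match."""
    k = sigma_ref.shape[0]
    inv_ref = np.linalg.inv(sigma_ref)
    _, ld_ref = np.linalg.slogdet(sigma_ref)   # log-determinants via slogdet
    _, ld_est = np.linalg.slogdet(sigma_est)   # avoid overflow of det()
    return 0.5 * (np.trace(inv_ref @ sigma_est) - k + ld_ref - ld_est)
```

    In practice the reference matrix is unknown, which is where the bootstrap estimation of the criterion comes in.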

  18. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to measure the fixed covariate of right censored data using a parametric survival model. The scale and shape parameters were modified to differentiate the analysis of the parametric regression survival model. Statistically, the biases, mean biases and the coverage probability were used in this analysis. Consequently, different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right censored data. The R statistical software was utilised to develop the simulation coding for right censored data. Besides, the final model of the right censored simulation was compared with right censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters with different sample sizes help to improve the simulation strategy for right censored data, and that the Weibull regression survival model is a suitable fit for the survival data of lung cancer patients in Malaysia.
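    A minimal sketch of generating right-censored Weibull survival data of the kind used in such simulation studies (purely illustrative: the function name, the choice of exponential censoring, and the parameter values are ours, not those of the paper):

```python
import math
import random

def simulate_right_censored_weibull(n, shape, scale, censor_rate, seed=0):
    """Draw n Weibull(shape, scale) event times and right-censor them
    with independent exponential censoring times of rate censor_rate.
    Returns a list of (observed time, event indicator) pairs, where the
    indicator is True if the event was observed (not censored)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        t = scale * (-math.log(rng.random())) ** (1.0 / shape)  # event time
        c = -math.log(rng.random()) / censor_rate               # censoring time
        data.append((min(t, c), t <= c))
    return data
```

    Repeating this over several sample sizes and (shape, scale) settings, then fitting the parametric model to each replicate, gives the bias and coverage summaries the study reports.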

  19. Improvement of covariance data for fast reactors

    International Nuclear Information System (INIS)

    Shibata, Keiichi; Hasegawa, Akira

    2000-02-01

    Over the past three years, we estimated covariances of the JENDL-3.2 data on the nuclides and reactions needed to analyze fast-reactor cores, and produced covariance files. The present work was undertaken to re-examine the covariance files and to make some improvements. The covariances improved are the ones for the inelastic scattering cross section of 16 O, the total cross section of 23 Na, the fission cross section of 235 U, the capture cross section of 238 U, and the resolved resonance parameters for 238 U. Moreover, the covariances of 233 U data were newly estimated in the present work. The covariances obtained were compiled in the ENDF-6 format. (author)

  20. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    Science.gov (United States)

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-02-06

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are
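    The M×E decomposition can be implemented as an explicit regression by augmenting the marker matrix: one block of columns shared across environments (main effects) plus one zeroed-out block per environment (environment-specific deviations). A minimal sketch of that design-matrix construction (the function name is ours, and this omits the shrinkage/BLUP machinery an actual GS fit would use):

```python
import numpy as np

def mxe_design(X_by_env):
    """Stack per-environment genotype matrices into a marker x environment
    (MxE) design: the first p columns carry the across-environment main
    effects, followed by one p-column block per environment carrying the
    environment-specific deviations."""
    n_env = len(X_by_env)
    rows = []
    for j, Xe in enumerate(X_by_env):
        blocks = [Xe]                                 # shared main effects
        for k in range(n_env):                        # interaction blocks
            blocks.append(Xe if k == j else np.zeros_like(Xe))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)
```

    Fitting a penalized regression on this design then yields one stable effect per marker plus per-environment deviations, which is the decomposition described above.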

  1. AMPTRACT: an algebraic model for computing pressure tube circumferential and steam temperature transients under stratified channel coolant conditions

    International Nuclear Information System (INIS)

    Gulshani, P.; So, C.B.

    1986-10-01

    In a number of postulated accident scenarios in a CANDU reactor, some of the horizontal fuel channels are predicted to experience periods of stratified channel coolant conditions, which can lead to a circumferential temperature gradient around the pressure tube. To study pressure tube strain and integrity under stratified flow channel conditions, it is necessary to determine the pressure tube circumferential temperature distribution. This paper presents an algebraic model, called AMPTRACT (Algebraic Model for Pressure Tube TRAnsient Circumferential Temperature), developed to give the transient temperature distribution in closed form. AMPTRACT models the following modes of heat transfer: radiation from the outermost elements to the pressure tube and from the pressure tube to the calandria tube, convection between the fuel elements and the pressure tube and superheated steam, and circumferential conduction from the exposed to the submerged part of the pressure tube. An iterative procedure is used to solve the mass and energy equations in closed form for axial steam and fuel-sheath transient temperature distributions. The one-dimensional conduction equation is then solved to obtain the pressure tube circumferential transient temperature distribution in a cosine series expansion. In the limit of large times and in the absence of convection and radiation to the calandria tube, the predicted pressure tube temperature distribution reduces identically to a parabolic profile. In this limit, however, radiation cannot be ignored because the temperatures are generally high. Convection and radiation tend to flatten the parabolic distribution.

  2. Lorentz Covariance of Langevin Equation

    International Nuclear Information System (INIS)

    Koide, T.; Denicol, G.S.; Kodama, T.

    2008-01-01

    Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)

  3. Research Article Comparing covariance matrices: random skewers method compared to the common principal components model

    Directory of Open Access Journals (Sweden)

    James M. Cheverud

    2007-03-01

    Full Text Available Comparisons of covariance patterns are becoming more common as interest in the evolution of relationships between traits and in the evolutionary phenotypic diversification of clades has grown. We present parallel analyses of covariance matrix similarity for cranial traits in 14 New World monkey genera using the Random Skewers (RS), T-statistics, and Common Principal Components (CPC) approaches. We find that the CPC approach is very powerful in that, with adequate sample sizes, it can be used to detect significant differences in matrix structure, even between matrices that are virtually identical in their evolutionary properties, as indicated by the RS results. We suggest that in many instances the assumption that population covariance matrices are identical be rejected out of hand. The more interesting and relevant question is, How similar are two covariance matrices with respect to their predicted evolutionary responses? This issue is addressed by the random skewers method described here.
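    A minimal sketch of the Random Skewers comparison described above: apply the same random unit-length "selection gradients" to both covariance matrices and average the vector correlations of the predicted responses (the function name is ours, and this omits the significance testing used in the paper):

```python
import numpy as np

def random_skewers(G1, G2, n_skewers=1000, seed=0):
    """Random skewers similarity of two covariance matrices: for random
    unit-length selection gradients beta, compare the predicted response
    vectors delta_z = G @ beta by their vector correlation, averaged over
    many skewers.  Identical matrices score 1.0."""
    rng = np.random.default_rng(seed)
    k = G1.shape[0]
    corrs = []
    for _ in range(n_skewers):
        beta = rng.standard_normal(k)
        beta /= np.linalg.norm(beta)          # unit-length skewer
        r1, r2 = G1 @ beta, G2 @ beta         # predicted responses
        corrs.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    return float(np.mean(corrs))
```

    Averaging the response-vector correlations answers the "how similar are the predicted evolutionary responses" question directly, rather than testing strict matrix equality as CPC does.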

  4. Estimating local atmosphere-surface fluxes using eddy covariance and numerical Ogive optimization

    DEFF Research Database (Denmark)

    Sievers, Jakob; Papakyriakou, Tim; Larsen, Søren

    2014-01-01

    Estimating representative surface-fluxes using eddy covariance leads invariably to questions concerning inclusion or exclusion of low-frequency flux contributions. For studies where fluxes are linked to local physical parameters and up-scaled through numerical modeling efforts, low-frequency contributions ...

  5. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, therefore introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7. In contrast, using the area-weighted average method yielded a low (r2 = 0.14 correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.

  6. Directional selection effects on patterns of phenotypic (co)variation in wild populations.

    Science.gov (United States)

    Assis, A P A; Patton, J L; Hubbe, A; Marroig, G

    2016-11-30

    Phenotypic (co)variation is a prerequisite for evolutionary change, and understanding how (co)variation evolves is of crucial importance to the biological sciences. Theoretical models predict that under directional selection, phenotypic (co)variation should evolve in step with the underlying adaptive landscape, increasing the degree of correlation among co-selected traits as well as the amount of genetic variance in the direction of selection. Whether either of these outcomes occurs in natural populations is an open question and thus an important gap in evolutionary theory. Here, we documented changes in the phenotypic (co)variation structure in two separate natural populations in each of two chipmunk species (Tamias alpinus and T. speciosus) undergoing directional selection. In populations where selection was strongest (those of T. alpinus), we observed changes, at least for one population, in phenotypic (co)variation that matched theoretical expectations, namely an increase of both phenotypic integration and (co)variance in the direction of selection and a re-alignment of the major axis of variation with the selection gradient. © 2016 The Author(s).

  7. Analysis of photonic band-gap structures in stratified medium

    DEFF Research Database (Denmark)

    Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong

    2005-01-01

    Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed in solving the Maxwell's equations numerically. It expands the temporal derivatives using finite differences, while it adopts the Fourier transform (FT) properties to expand the spatial derivatives in Maxwell's equations. In addition, the method makes use of the chain-rule property in calculus together ... Originality/value - The method validates its values and properties through extensive studies on regular and defective 1D PBG structures in stratified medium, and it can be further extended to solving more ... in electromagnetic and microwave applications once the Maxwell's equations are appropriately modeled.

  8. An Adaptive Estimation of Forecast Error Covariance Parameters for Kalman Filtering Data Assimilation

    Institute of Scientific and Technical Information of China (English)

    Xiaogu ZHENG

    2009-01-01

    An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing -2log-likelihood of observed-minus-forecast residuals. The proposed approach could be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
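    For scalar observations, the adaptive step described above reduces to a one-dimensional search: choose the inflation factor for the forecast error covariance that minimizes -2 log-likelihood of the observed-minus-forecast residuals. A minimal sketch (the function name and the grid search are ours; the paper embeds this estimation within a full Kalman filter):

```python
import math

def fit_covariance_scale(innovations, s, r, grid=None):
    """Estimate a scalar inflation factor lam for the forecast error
    covariance by minimizing -2 log-likelihood of scalar
    observed-minus-forecast residuals d_i ~ N(0, lam*s_i + r_i), where
    s_i is the ensemble-based forecast error variance mapped to
    observation space and r_i the observation error variance."""
    if grid is None:
        grid = [0.1 * k for k in range(1, 101)]   # candidate lam values
    def neg2ll(lam):
        tot = 0.0
        for d, si, ri in zip(innovations, s, r):
            v = lam * si + ri
            tot += math.log(v) + d * d / v        # -2 log N(d; 0, v) + const
        return tot
    return min(grid, key=neg2ll)
```

    In a filtering context this fit would be refreshed at each assimilation cycle, letting the scale parameters track model error statistics that are not known a priori.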

  9. Analysis of Turbulent Combustion in Simplified Stratified Charge Conditions

    Science.gov (United States)

    Moriyoshi, Yasuo; Morikawa, Hideaki; Komatsu, Eiji

    The stratified charge combustion system has been widely studied due to its significant potential for low fuel consumption and low exhaust gas emissions. The fuel-air mixture formation process in a direct-injection stratified charge engine is influenced by various parameters, such as atomization, evaporation, and in-cylinder gas motion at high temperature and high pressure. It is difficult to observe the in-cylinder phenomena under such conditions, and also challenging to analyze the subsequent stratified charge combustion. Therefore, combustion phenomena are examined under simplified stratified charge conditions in order to analyze fundamental stratified charge combustion. That is, an experimental apparatus which can control the mixture distribution and the gas motion at ignition timing was developed, and the effects of turbulence intensity, mixture concentration distribution, and mixture composition on stratified charge combustion were examined. As a result, the effects of fuel, charge stratification, and turbulence on combustion characteristics were clarified.

  10. Alcohol advertising, consumption and abuse: a covariance-structural modelling look at Strickland's data.

    Science.gov (United States)

    Adlaf, E M; Kohn, P M

    1989-07-01

    Re-analysis employing covariance-structural models was conducted on Strickland's (1983) survey data on 772 drinking students from Grades 7, 9 and 11. These data bear on the relations among alcohol consumption, alcohol abuse, association with drinking peers and exposure to televised alcohol advertising. Whereas Strickland used a just-identified model which, therefore, could not be tested for goodness of fit, our re-analysis tested several alternative models, which could be contradicted by the data. One model did fit his data particularly well. Its major implications are as follows: (1) Symptomatic consumption, negative consequences and self-rated severity of alcohol-related problems apparently reflect a common underlying factor, namely alcohol abuse. (2) Use of alcohol to relieve distress and frequency of intoxication, however, appear not to reflect abuse, although frequent intoxication contributes substantially to it. (3) Alcohol advertising affects consumption directly and abuse indirectly, although peer association has far greater impact on both consumption and abuse. These findings are interpreted as lending little support to further restrictions on advertising.

  11. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    Science.gov (United States)

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate the unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and can capture the dynamic effects of time and other variables of interest on both the mean and the covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
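The Cholesky treatment of a within-subject covariance matrix can be sketched in a few lines; this is a generic illustration with made-up 3×3 values, not the authors' estimation code:

```python
import math

def cholesky(sigma):
    """Lower-triangular Cholesky factor L of a symmetric
    positive-definite matrix sigma, so that L L^T = sigma."""
    n = len(sigma)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(sigma[i][i] - s)
            else:
                L[i][j] = (sigma[i][j] - s) / L[j][j]
    return L

# Illustrative within-subject covariance for three repeated measures.
sigma = [[4.0, 2.0, 1.0],
         [2.0, 3.0, 1.5],
         [1.0, 1.5, 2.0]]
L = cholesky(sigma)
```

Because any lower-triangular factor with a positive diagonal yields a valid positive-definite matrix, modelling the factor's entries rather than the covariance itself leaves them unconstrained, which is the appeal of Cholesky-based regression models for covariances.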

  12. Covariance descriptor fusion for target detection

    Science.gov (United States)

    Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih

    2016-05-01

    Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection presents various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor offers many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing the covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
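The region covariance descriptor that the fusion builds on is simply the sample covariance of per-pixel feature vectors within a region; a minimal sketch, with hypothetical 2-D features (the feature choice and values are illustrative, not from the paper):

```python
def covariance_descriptor(features):
    """Region covariance descriptor: sample covariance of the
    per-pixel feature vectors inside an image region."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in features) / (n - 1)
             for j in range(d)] for i in range(d)]

# Toy 2-D features (e.g., band intensity and a gradient measure).
feats = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
C = covariance_descriptor(feats)
```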

  13. Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2004-01-01

    This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
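The basic realized-covariation estimator is simple to state: over the fixed interval, sum the products of synchronized high-frequency returns on the two assets. A toy sketch with illustrative numbers (not data from the paper):

```python
import math

def realized_covariance(a, b):
    """Realized covariance over a fixed interval: the sum of products
    of synchronized high-frequency returns on two assets."""
    return sum(x * y for x, y in zip(a, b))

def realized_correlation(a, b):
    """Realized correlation implied by the realized (co)variances."""
    return realized_covariance(a, b) / math.sqrt(
        realized_covariance(a, a) * realized_covariance(b, b))

# A few intraday returns over one trading day (illustrative numbers).
ra = [0.010, -0.020, 0.015]
rb = [0.008, -0.010, 0.020]
```

As the sampling frequency grows, the sum converges to the quadratic covariation of the underlying price processes, which is what the asymptotic theory in the paper is about.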

  14. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    Science.gov (United States)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
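The ensemble-based estimation of cross-covariances amounts to a sample covariance taken across ensemble members; a minimal sketch with hypothetical temperature and salinity values at a single grid point (not the study's data):

```python
def ensemble_cross_covariance(ens_a, ens_b):
    """Sample cross-covariance between two model variables, estimated
    from corresponding members of an ensemble of model states."""
    n = len(ens_a)
    ma = sum(ens_a) / n
    mb = sum(ens_b) / n
    return sum((a - ma) * (b - mb) for a, b in zip(ens_a, ens_b)) / (n - 1)

# Hypothetical ensemble values at one grid point.
temp = [20.0, 21.0, 19.0, 22.0]   # temperature, deg C
salt = [35.0, 35.2, 34.9, 35.3]   # salinity, psu
```

Because each ensemble member is a full model integration, such estimates inherit the model's dynamical constraints, which is the advantage over prescribed analytic covariance functions noted in the abstract.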

  15. Interfacial transport characteristics in a gas-liquid or an immiscible liquid-liquid stratified flow

    International Nuclear Information System (INIS)

    Inoue, A.; Aoki, S.; Aritomi, M.; Kozawa, Y.

    1982-01-01

    This paper reviews the interfacial transport characteristics of mass, momentum and energy in gas-liquid and immiscible liquid-liquid stratified flows with a wavy interface, which have been studied in our division. In the experiments, the characteristics of the wave motion and its effect on the turbulence near the interface, as well as overall flow characteristics such as the pressure drop and the position of the interface, were investigated in air-water, air-mercury and water-liquid metal stratified flows. In addition, several models based on the mixing-length model and a two-equation model of turbulence were proposed to predict the flow characteristics and the interfacial heat transfer in fully developed and undeveloped stratified flows, with special interfacial boundary conditions in which the wavy surface is treated as a rough surface corresponding to the wave height, a source of turbulent energy equal to the wave energy, and turbulence damped by surface tension; the models were examined against the experimental data. (author)

  16. Cortisol covariation within parents of young children: Moderation by relationship aggression.

    Science.gov (United States)

    Saxbe, Darby E; Adam, Emma K; Schetter, Christine Dunkel; Guardino, Christine M; Simon, Clarissa; McKinney, Chelsea O; Shalowitz, Madeleine U

    2015-12-01

    Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe and Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples' physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women's diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners' cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples' relationship functioning and physical health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Covariant canonical quantization of fields and Bohmian mechanics

    International Nuclear Information System (INIS)

    Nikolic, H.

    2005-01-01

    We propose a manifestly covariant canonical method of field quantization based on the classical De Donder-Weyl covariant canonical formulation of field theory. Owing to covariance, the space and time arguments of fields are treated on an equal footing. To achieve both covariance and consistency with standard non-covariant canonical quantization of fields in Minkowski spacetime, it is necessary to adopt a covariant Bohmian formulation of quantum field theory. A preferred foliation of spacetime emerges dynamically owing to a purely quantum effect. The application to a simple time-reparametrization invariant system and quantum gravity is discussed and compared with the conventional non-covariant Wheeler-DeWitt approach. (orig.)

  18. Do gamblers eat more salt? Testing a latent trait model of covariance in consumption.

    Science.gov (United States)

    Goodwin, Belinda C; Browne, Matthew; Rockloff, Matthew; Donaldson, Phillip

    2015-09-01

    A diverse class of stimuli, including certain foods, substances, media, and economic behaviours, may be described as 'reward-oriented' in that they provide immediate reinforcement with little initial investment. Neurophysiological and personality concepts, including dopaminergic dysfunction, reward sensitivity and rash impulsivity, each predict the existence of a latent behavioural trait that leads to increased consumption of all stimuli in this class. Whilst bivariate relationships (co-morbidities) are often reported in the literature, to our knowledge, a multivariate investigation of this possible trait has not been done. We surveyed 1,194 participants (550 male) on their typical weekly consumption of 11 types of reward-oriented stimuli, including fast food, salt, caffeine, television, gambling products, and illicit drugs. Confirmatory factor analysis was used to compare models in a 3×3 structure, based on the definition of a single latent factor (none, fixed loadings, or estimated loadings) and the assumed residual covariance structure (none, a priori / literature-based, or post hoc / data-driven). The inclusion of a single latent behavioural 'consumption' factor significantly improved model fit in all cases. Also confirming theoretical predictions, estimated factor loadings on reward-oriented indicators were uniformly positive, regardless of assumptions regarding residual covariances. Additionally, the latent trait was found to be negatively correlated with the non-reward-oriented indicators of fruit and vegetable consumption. The findings support the notion of a single behavioural trait leading to increased consumption of reward-oriented stimuli across multiple modalities. We discuss implications regarding the concentration of negative lifestyle-related health behaviours.
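The single-latent-factor structure tested in such confirmatory models implies a specific covariance matrix, Sigma = lambda lambda^T + diag(psi); a small sketch with illustrative loadings and residual variances (not estimates from the study):

```python
def implied_covariance(loadings, residual_vars):
    """Model-implied covariance of a single-latent-factor model:
    Sigma = lambda * lambda^T + diag(psi), with unit factor variance."""
    n = len(loadings)
    return [[loadings[i] * loadings[j] + (residual_vars[i] if i == j else 0.0)
             for j in range(n)] for i in range(n)]

# Two standardized indicators loading on one latent 'consumption' factor.
sigma = implied_covariance([0.5, 0.6], [0.75, 0.64])
```

Model fit is then judged by how closely this implied matrix reproduces the observed covariance matrix of the consumption indicators.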

  19. Proofs of Contracted Length Non-covariance

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1994-01-01

    Different proofs of the non-covariance of the contracted length are discussed. The proof based on establishing the inconstancy of the interval (its dependence on velocity) seems the most convincing. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the 4/3 problem') is a direct consequence of the non-covariance of the contracted length. 8 refs

  20. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism, where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  1. Information content of household-stratified epidemics

    Directory of Open Access Journals (Sweden)

    T.M. Kinyanjui

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.

  2. Information content of household-stratified epidemics.

    Science.gov (United States)

    Kinyanjui, T M; Pellis, L; House, T

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
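The Shannon-entropy criterion used to score designs can be approximated directly from posterior Monte-Carlo samples; a rough sketch using a binned estimator and hypothetical samples (the bin width and values are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def shannon_entropy(samples, bin_width):
    """Shannon entropy (in nats) of a posterior approximated by binning
    Monte-Carlo samples; a lower value means a more concentrated
    posterior, i.e. a more informative study design."""
    counts = Counter(int(s // bin_width) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical posterior samples of a transmission parameter under
# two candidate designs: one diffuse, one concentrated.
h_diffuse = shannon_entropy([0.05, 0.15, 0.25, 0.35], bin_width=0.1)
h_concentrated = shannon_entropy([0.11, 0.12, 0.13, 0.14], bin_width=0.1)
```

Comparing such entropies across candidate (households, sampling-frequency) combinations is the essence of the design search described above.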

  3. Computing the transport time scales of a stratified lake on the basis of Tonolli’s model

    Directory of Open Access Journals (Sweden)

    Marco Pilotti

    2014-05-01

    This paper deals with a simple model to evaluate the transport time scales in thermally stratified lakes that do not necessarily completely mix on a regular annual basis. The model is based on the formalization of an idea originally proposed in Italian by Tonolli in 1964, who presented a mass balance of the water initially stored within a lake, taking into account the known seasonal evolution of its thermal structure. The numerical solution of this mass balance provides an approximation to the water age distribution for the conceptualised lake, from which an upper bound on the typical time scales widely used in limnology can be obtained. After discussing the original test case considered by Tonolli, we apply the model to Lake Iseo, a deep lake located in northern Italy, presenting the results obtained on the basis of a 30-year series of data.

  4. Bayesian source term determination with unknown covariance of measurements

    Science.gov (United States)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the vector of observations y is related to the unknown source term x through the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term penalizes the misfit to the measurements, with covariance matrix R, and the second term regularizes the source term. Different choices of the matrices R and B yield different types of regularization; for example, Tikhonov regularization takes B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and an unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since exact inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
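For fixed R and B the objective above has the closed-form minimizer x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; a small sketch for the isotropic case R = r*I, B = b*I, with illustrative values (not ETEX data or the authors' variational algorithm):

```python
def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

def tikhonov_source_term(M, y, r, b):
    """Regularized estimate x = (M^T M / r + I / b)^{-1} M^T y / r
    of the minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x
    with R = r*I and B = b*I."""
    m, n = len(M), len(M[0])
    A = [[sum(M[k][i] * M[k][j] for k in range(m)) / r for j in range(n)]
         for i in range(n)]
    for i in range(n):
        A[i][i] += 1.0 / b
    rhs = [sum(M[k][i] * y[k] for k in range(m)) / r for i in range(n)]
    return solve(A, rhs)
```

With a very weak prior (large b) the estimate approaches the unregularized least-squares solution; shrinking b pulls the source term toward zero.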

  5. Grain distinct stratified nanolayers in aluminium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Donatus, U., E-mail: uyimedonatus@yahoo.com [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Thompson, G.E.; Zhou, X.; Alias, J. [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Tsai, I.-L. [Oxford Instruments NanoAnalysis, HP12 2SE, High Wycombe (United Kingdom)

    2017-02-15

    The grains of aluminium alloys contain stratified nanolayers which determine their mechanical and chemical responses. In this study, the nanolayers were revealed in the grains of AA6082 (T6 and T7 conditions), AA5083-O and AA2024-T3 alloys by etching the alloys in a solution comprising 20 g Cr₂O₃ + 30 ml HPO₃ in 1 L H₂O. Microstructural examination was conducted on selected grains of interest using scanning electron microscopy and the electron backscatter diffraction technique. It was observed that the nanolayers are orientation dependent and are parallel to the {100} planes. They have ordered and repeated tunnel squares that are flawed at the sides, which are aligned in the <100> directions. These flawed tunnel squares dictate the tunnelling corrosion morphology and also appear to affect the arrangement and sizes of the precipitation-hardening particles. The inclination of the stratified nanolayers, their interspacing, and the groove sizes have a significant influence on the corrosion behaviour and an apparent influence on the strengthening mechanism of the investigated aluminium alloys. - Highlights: • Stratified nanolayers in aluminium alloy grains. • Relationship of the stratified nanolayers with grain orientation. • Influence of the inclinations of the stratified nanolayers on corrosion. • Influence of the nanolayers' interspacing and groove sizes on hardness and corrosion.

  6. An Econometric Analysis of Modulated Realised Covariance, Regression and Correlation in Noisy Diffusion Models

    DEFF Research Database (Denmark)

    Kinnebrock, Silja; Podolskij, Mark

    This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis … process can be relaxed and how our method can be applied to non-synchronous observations. We also present an empirical study of how high-frequency correlations, regressions and covariances change through time.

  7. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    Science.gov (United States)

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptual abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine whether general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We fitted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either the Wechsler FSIQ or the RPM to the regression models controlled for the effects of intelligence. Results In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive

  8. Covariance and sensitivity data generation at ORNL

    International Nuclear Information System (INIS)

    Leal, L. C.; Derrien, H.; Larson, N. M.; Alpan, A.

    2005-01-01

    Covariance data are required to assess uncertainties in design parameters in several nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. In this paper we address the generation of covariance data in the resonance region with the computer code SAMMY. SAMMY is used in the evaluation of experimental data in the resolved and unresolved resonance energy regions. The fitting of cross-section data is based on the generalised least-squares formalism (Bayesian theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance parameter covariance data. In the evaluation process, SAMMY generates a set of resonance parameters that fit the data and provides the resonance parameter covariances. For resonance parameter evaluations where no resonance parameter covariance data are available, the alternative is an approach called 'retroactive' resonance parameter covariance generation. In this paper, we describe the application of the retroactive covariance generation approach to the gadolinium isotopes. (authors)
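For a single parameter and a single measurement, the generalized least-squares (Bayesian) fitting underlying such evaluations reduces to the familiar precision-weighted update; a schematic sketch of that reduction, not SAMMY's actual implementation:

```python
def bayes_update(prior_mean, prior_var, g, y, v):
    """Bayesian (generalized least-squares) update of one parameter
    theta from a measurement y = g * theta + noise with variance v,
    starting from a Gaussian prior. Posterior precision is the sum of
    prior precision and measurement precision."""
    post_var = 1.0 / (1.0 / prior_var + g * g / v)
    post_mean = post_var * (prior_mean / prior_var + g * y / v)
    return post_mean, post_var

# One measurement pulls the parameter toward the data and shrinks
# its variance (illustrative numbers).
mean, var = bayes_update(prior_mean=0.0, prior_var=1.0, g=1.0, y=2.0, v=1.0)
```

In the multiparameter case the same update operates on the full parameter covariance matrix, which is how the fit produces resonance parameter covariances alongside the parameters themselves.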

  9. Structural Covariance of the Prefrontal-Amygdala Pathways Associated with Heart Rate Variability.

    Science.gov (United States)

    Wei, Luqing; Chen, Hong; Wu, Guo-Rong

    2018-01-01

    The neurovisceral integration model has shown a key role of the amygdala in neural circuits underlying heart rate variability (HRV) modulation, and suggested that reciprocal connections from amygdala to brain regions centered on the central autonomic network (CAN) are associated with HRV. To provide neuroanatomical evidence for these theoretical perspectives, the current study used covariance analysis of MRI-based gray matter volume (GMV) to map structural covariance network of the amygdala, and then determined whether the interregional structural correlations related to individual differences in HRV. The results showed that covariance patterns of the amygdala encompassed large portions of cortical (e.g., prefrontal, cingulate, and insula) and subcortical (e.g., striatum, hippocampus, and midbrain) regions, lending evidence from structural covariance analysis to the notion that the amygdala was a pivotal node in neural pathways for HRV modulation. Importantly, participants with higher resting HRV showed increased covariance of amygdala to dorsal medial prefrontal cortex and anterior cingulate cortex (dmPFC/dACC) extending into adjacent medial motor regions [i.e., pre-supplementary motor area (pre-SMA)/SMA], demonstrating structural covariance of the prefrontal-amygdala pathways implicated in HRV, and also implying that resting HRV may reflect the function of neural circuits underlying cognitive regulation of emotion as well as facilitation of adaptive behaviors to emotion. Our results, thus, provide anatomical substrates for the neurovisceral integration model that resting HRV may index an integrative neural network which effectively organizes emotional, cognitive, physiological and behavioral responses in the service of goal-directed behavior and adaptability.

  10. Covariance Evaluation Methodology for Neutron Cross Sections

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples, including a relatively detailed evaluation of covariances for two individual nuclei and the mass production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  11. Universal correlations and power-law tails in financial covariance matrices

    Science.gov (United States)

    Akemann, G.; Fischmann, J.; Vivo, P.

    2010-07-01

    We investigate whether quantities such as the global spectral density or individual eigenvalues of financial covariance matrices can be best modelled by standard random matrix theory or rather by its generalisations displaying power-law tails. In order to generate individual eigenvalue distributions a chopping procedure is devised, which produces a statistical ensemble of asset-price covariances from a single instance of financial data sets. Local results for the smallest eigenvalue and individual spacings are very stable upon reshuffling the time windows and assets. They are in good agreement with the universal Tracy-Widom distribution and Wigner surmise, respectively. This suggests a strong degree of robustness especially in the low-lying sector of the spectra, most relevant for portfolio selections. Conversely, the global spectral density of a single covariance matrix as well as the average over all unfolded nearest-neighbour spacing distributions deviate from standard Gaussian random matrix predictions. The data are in fair agreement with a recently introduced generalised random matrix model, with correlations showing a power-law decay.
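For the 2×2 case the eigenvalues of a covariance matrix, and the Marchenko-Pastur support against which empirical spectra of pure-noise covariances are compared, can be written down directly; an illustrative sketch of these standard random-matrix formulas (not the paper's chopping procedure):

```python
import math

def eig2(c):
    """Eigenvalues of a symmetric 2x2 covariance matrix, largest first."""
    tr = c[0][0] + c[1][1]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def marchenko_pastur_support(q):
    """Support [(1 - sqrt(q))^2, (1 + sqrt(q))^2] of the Marchenko-Pastur
    density for aspect ratio q = N/T and unit variance: where the bulk of
    sample eigenvalues of an uncorrelated covariance matrix falls."""
    return (1.0 - math.sqrt(q)) ** 2, (1.0 + math.sqrt(q)) ** 2

# Eigenvalues of an illustrative 2x2 correlation matrix.
lam_hi, lam_lo = eig2([[1.0, 0.5], [0.5, 1.0]])
```

Eigenvalues falling well outside the Marchenko-Pastur support are the usual signal of genuine correlation structure (or of the power-law deviations the paper investigates).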

  12. Torsion and geometrostasis in covariant superstrings

    Energy Technology Data Exchange (ETDEWEB)

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.

  13. Torsion and geometrostasis in covariant superstrings

    International Nuclear Information System (INIS)

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs

  14. Covariance matrices of experimental data

    International Nuclear Information System (INIS)

    Perey, F.G.

    1978-01-01

A complete statement of the uncertainties in data is given by their covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. The determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of ²³⁸U
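The procedure for combining data with a non-diagonal covariance matrix is, in its simplest form, a generalised least-squares average. The sketch below combines two measurements of one quantity sharing a fully correlated systematic component; the numerical values are illustrative placeholders, not the carbon or ²³⁸U resonance data of the paper:

```python
import numpy as np

def combine(y, V):
    """Best linear unbiased estimate of a common quantity from
    correlated measurements y with covariance matrix V (non-diagonal
    allowed).  Returns the combined value and its variance."""
    Vinv = np.linalg.inv(V)
    one = np.ones_like(y)
    norm = one @ Vinv @ one
    w = Vinv @ one / norm        # GLS weights, sum to 1
    return w @ y, 1.0 / norm

# two measurements with independent statistical errors plus a common
# (fully correlated) systematic -- all numbers made up
y = np.array([2.078, 2.082])                 # e.g. a resonance energy, MeV
stat = np.array([0.002, 0.003])              # statistical standard deviations
sys = 0.001                                  # shared systematic component
V = np.diag(stat**2) + sys**2                # non-diagonal covariance
mu, var = combine(y, V)
```

Note that the combined variance can never fall below the fully correlated systematic contribution, which is exactly the kind of information a diagonal treatment would lose.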

  15. New perspective in covariance evaluation for nuclear data

    International Nuclear Information System (INIS)

    Kanda, Y.

    1992-01-01

Methods of nuclear data evaluation have been highly developed during the past decade, especially after the introduction of the concept of covariance. This makes it of utmost importance how covariance matrices for nuclear data are evaluated. It can be said that covariance evaluation is the nuclear data evaluation itself, because the covariance matrix plays a quantitatively decisive role in current evaluation methods. The covariance primarily represents experimental uncertainties. However, the correlation of individual uncertainties between different data must be taken into account, and this cannot be done without detailed physical consideration of the experimental conditions. This procedure depends on the evaluator, and so does the estimated covariance. The mathematical properties of the covariance have been intensively discussed. Its physical properties should be studied in order to apply it to nuclear data evaluation and, in this report, are reviewed to give the basis for further development of covariance applications. (orig.)

  16. Keratinocytes propagated in serum-free, feeder-free culture conditions fail to form stratified epidermis in a reconstituted skin model.

    Directory of Open Access Journals (Sweden)

    Rebecca Lamb

Primary human epidermal stem cells isolated from skin tissues and subsequently expanded in tissue culture are used therapeutically to reconstitute skin on patients and to generate artificial skin in culture for academic and commercial research. Classically, epidermal cells, known as keratinocytes, required fibroblast feeder support and serum-containing media for serial propagation. In alignment with global efforts to remove potential animal contaminants, many serum-free, feeder-free culture methods have been developed that support derivation and growth of these cells in 2-dimensional culture. Here we show that keratinocytes grown continually in serum-free and feeder-free conditions were unable to form a stratified, mature epidermis in a skin equivalent model. This is not due to loss of cell potential, as keratinocytes propagated in serum-free, feeder-free conditions retain their ability to form stratified epidermis when re-introduced to classic serum-containing media. Extracellular calcium supplementation failed to improve epidermis development. In contrast, the addition of serum to commercial growth media developed for serum-free expansion of keratinocytes facilitated 3-dimensional stratification in our skin equivalent model. Moreover, the addition of heat-inactivated serum improved the epidermis structure and thickness, suggesting that serum contains factors that both aid and inhibit stratification.

  17. General Practitioners' and patients' perceptions towards stratified care: a theory informed investigation.

    Science.gov (United States)

    Saunders, Benjamin; Bartlam, Bernadette; Foster, Nadine E; Hill, Jonathan C; Cooper, Vince; Protheroe, Joanne

    2016-08-31

Stratified primary care involves changing General Practitioners' (GPs') clinical behaviour in treating patients, away from the current stepped care approach and towards identifying early treatment options that are matched to patients' risk of persistent disabling pain. This article explores the perspectives of UK-based GPs and patients on a prognostic stratified care model being developed for patients with the five most common primary care musculoskeletal pain presentations. The focus was on views about acceptability, and anticipated barriers and facilitators to the use of stratified care in routine practice. Four focus groups and six semi-structured telephone interviews were conducted with GPs (n = 23), and three focus groups with patients (n = 20). Data were analysed thematically, and identified themes were examined in relation to the Theoretical Domains Framework (TDF), which facilitates comprehensive identification of behaviour change determinants. A critical approach was taken in using the TDF, examining the nuanced interrelationships between theoretical domains. Four key themes were identified: acceptability of clinical decision-making guided by stratified care; impact on the therapeutic relationship; embedding a prognostic approach within a biomedical model; and practical issues in using stratified care. Whilst specific findings are reported within each theme, common across themes were the identified relationships between the theoretical domains of knowledge, skills, professional role and identity, environmental context and resources, and goals. Through analysis of these identified relationships it was found that, for GPs and patients to perceive stratified care as acceptable, it must be seen to enhance GPs' knowledge and skills, not undermine GPs' and patients' respective identities, and be integrated within the environmental context of the consultation with minimal disruption. Findings highlight the importance of taking into account the context of

  18. Regularized principal covariates regression and its application to finding coupled patterns in climate fields

    Science.gov (United States)

    Fischer, M. J.

    2014-02-01

    There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.

  19. Covariant perturbations of Schwarzschild black holes

    International Nuclear Information System (INIS)

    Clarkson, Chris A; Barrett, Richard K

    2003-01-01

    We present a new covariant and gauge-invariant perturbation formalism for dealing with spacetimes having spherical symmetry (or some preferred spatial direction) in the background, and apply it to the case of gravitational wave propagation in a Schwarzschild black-hole spacetime. The 1 + 3 covariant approach is extended to a '1 + 1 + 2 covariant sheet' formalism by introducing a radial unit vector in addition to the timelike congruence, and decomposing all covariant quantities with respect to this. The background Schwarzschild solution is discussed and a covariant characterization is given. We give the full first-order system of linearized 1 + 1 + 2 covariant equations, and we show how, by introducing (time and spherical) harmonic functions, these may be reduced to a system of first-order ordinary differential equations and algebraic constraints for the 1 + 1 + 2 variables which may be solved straightforwardly. We show how both odd- and even-parity perturbations may be unified by the discovery of a covariant, frame- and gauge-invariant, transverse-traceless tensor describing gravitational waves, which satisfies a covariant wave equation equivalent to the Regge-Wheeler equation for both even- and odd-parity perturbations. We show how the Zerilli equation may be derived from this tensor, and derive a similar transverse-traceless tensor equation equivalent to this equation. The so-called special quasinormal modes with purely imaginary frequency emerge naturally. The significance of the degrees of freedom in the choice of the two frame vectors is discussed, and we demonstrate that, for a certain frame choice, the underlying dynamics is governed purely by the Regge-Wheeler tensor. The two transverse-traceless Weyl tensors which carry the curvature of gravitational waves are discussed, and we give the closed system of four first-order ordinary differential equations describing their propagation. Finally, we consider the extension of this work to the study of

  20. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    Science.gov (United States)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formulation of the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters each method involves were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within the noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
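The first of the three remedies, Tikhonov regularisation of the noise covariance matrix inside a weighted least-squares estimator, can be sketched on a synthetic problem. The setup below (a covariance with a gradually decaying singular-value spectrum, three unknowns, a fixed regularisation parameter) is a toy construction, not the GOCO05s experiment or the authors' parameter choice rule:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 3
A = rng.standard_normal((n, p))
x_true = np.array([1.0, -2.0, 0.5])

# noise covariance with a gradually decaying singular-value spectrum
# (no noticeable gap), condition number ~1e12
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -12, n)
C = (U * s) @ U.T                        # C = U diag(s) U^T
noise = U @ (np.sqrt(s) * rng.standard_normal(n))   # noise ~ N(0, C)
y = A @ x_true + noise

def wls_tikhonov(A, y, C, alpha):
    """Weighted least squares with the noise covariance regularised
    as C + alpha*I before inversion (the 'standard formula' route)."""
    W = np.linalg.inv(C + alpha * np.eye(len(y)))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

x_hat = wls_tikhonov(A, y, C, alpha=1e-6)
err = np.linalg.norm(x_hat - x_true)
```

Without the `alpha` term the direct inversion of `C` is numerically meaningless at this condition number; with it, the estimate is stable, at the price of choosing `alpha` well.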

  1. LOW-FIDELITY COVARIANCES FOR NEUTRON CROSS SECTIONS ON 57 STRUCTURAL AND 31 HEAVY NUCLEI IN THE FAST REGION

    International Nuclear Information System (INIS)

    PIGNI, M.T.; HERMAN, M.; OBLOZINSKY, P.

    2008-01-01

We produced a large set of neutron cross section covariances in the energy range of 5 keV-20 MeV. The present set of data on 57 structural materials and 31 heavy nuclei follows our earlier work on 219 fission product materials and completes our extensive contribution to the low-fidelity covariance project (307 materials). This project aims to provide initial, low-fidelity yet consistent estimates of covariance data for nuclear criticality safety applications. The evaluation methodology combines the nuclear reaction model code EMPIRE, which calculates sensitivities to nuclear reaction model parameters, and the Bayesian code KALMAN, which propagates uncertainties of the model parameters to cross sections. Taking into account the large scale of the project, only marginal reference to experimental data was made. The covariances were derived from the perturbation of several key model parameters selected by the sensitivity analysis. These parameters refer to the optical model potential, the level densities and the strength of the pre-equilibrium emission. This work represents the first attempt ever to generate nuclear data covariances on such a large scale.
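The propagation step described above, from model-parameter uncertainties to a cross-section covariance through a sensitivity matrix, follows the standard "sandwich rule" C_sigma = S C_p S^T. The sensitivities and parameter uncertainties below are made-up placeholders, not EMPIRE or KALMAN values:

```python
import numpy as np

# hypothetical: 3 model parameters (say, an optical-model depth, a level
# density and a pre-equilibrium strength) and 4 energy points of one
# cross section; entries are relative sensitivities d(sigma_i)/d(p_j)
S = np.array([[0.8, 0.1, 0.0],
              [0.6, 0.3, 0.1],
              [0.4, 0.4, 0.3],
              [0.2, 0.3, 0.6]])
rel_unc = np.array([0.05, 0.10, 0.30])    # assumed parameter uncertainties
Cp = np.diag(rel_unc**2)                  # uncorrelated prior parameters

Csig = S @ Cp @ S.T                       # sandwich-rule propagation
rel_err = np.sqrt(np.diag(Csig))          # cross-section uncertainties
corr = Csig / np.outer(rel_err, rel_err)  # energy-energy correlations
```

Even with uncorrelated parameters, the shared sensitivities induce strong energy-energy correlations in `Csig`, which is why parameter-based covariances are never diagonal.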

  2. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.

  3. The stratified H-index makes scientific impact transparent

    DEFF Research Database (Denmark)

    Würtz, Morten; Schmidt, Morten

    2017-01-01

The H-index is widely used to quantify and standardize researchers' scientific impact. However, the H-index does not account for the fact that co-authors rarely contribute equally to a paper. Accordingly, we propose the use of a stratified H-index to measure scientific impact. The stratified H-index supplements the conventional H-index with three separate H-indices: one for first authorships, one for second authorships and one for last authorships. The stratified H-index takes scientific output, quality and individual author contribution into account.
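The proposal reduces to computing the ordinary H-index four times: once overall and once per authorship stratum. A minimal sketch, with the paper/role encoding being my own assumption about how one might represent a publication list:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked) if c >= i + 1)

def stratified_h(papers):
    """papers: list of (citations, author_position) pairs, position in
    {'first', 'second', 'last', 'middle'}.  Returns the conventional
    H-index plus one H-index per authorship stratum, as proposed."""
    overall = h_index([c for c, _ in papers])
    strata = {role: h_index([c for c, r in papers if r == role])
              for role in ("first", "second", "last")}
    return overall, strata

# a hypothetical publication record
papers = [(25, "first"), (18, "last"), (12, "first"),
          (9, "second"), (7, "middle"), (3, "first")]
overall, strata = stratified_h(papers)   # overall 5; first 3, second 1, last 1
```

The stratified indices make the contribution pattern visible: a researcher with overall H = 5 driven mainly by middle authorships would score low on all three strata.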

  4. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    Science.gov (United States)

    Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo

    2015-06-01

We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero, leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  5. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    International Nuclear Information System (INIS)

    Bianchi, Eugenio; Lorenzo, Tommaso De; Smerlak, Matteo

    2015-01-01

    We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole “exterior entropy” and “radiation entropy.” For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the “black hole fireworks” model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the “purifying” phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  6. Bayesian tests on components of the compound symmetry covariance matrix

    NARCIS (Netherlands)

    Mulder, J.; Fox, J.P.

    2013-01-01

    Complex dependency structures are often conditionally modeled, where random effects parameters are used to specify the natural heterogeneity in the population. When interest is focused on the dependency structure, inferences can be made from a complex covariance matrix using a marginal modeling

  7. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    Science.gov (United States)

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  8. Conformally covariant composite operators in quantum chromodynamics

    International Nuclear Information System (INIS)

    Craigie, N.S.; Dobrev, V.K.; Todorov, I.T.

    1983-03-01

Conformal covariance is shown to determine renormalization properties of composite operators in QCD and in the C₆³-model at the one-loop level. Its relevance to higher-order (renormalization group improved) perturbative calculations in the short distance limit is also discussed. Light cone operator product expansions and spectral representations for wave functions in QCD are derived. (author)

  9. Structural Analysis of Covariance and Correlation Matrices.

    Science.gov (United States)

    Joreskog, Karl G.

    1978-01-01

A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.

  10. Modelling carbon and water exchange of a grazed pasture in New Zealand constrained by eddy covariance measurements.

    Science.gov (United States)

    Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I

    2015-04-15

    We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting. Copyright © 2015. Published by Elsevier B.V.

  11. Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2017-01-01

    The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.
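The complete-case analysis that the paper uses as its simple, inefficient baseline can be sketched on simulated data. The data-generating numbers below (onset ages, censoring distribution, regression coefficients) are invented for illustration and do not come from the Alzheimer's study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(70, 8, n)                    # true covariate (e.g. onset age)
cen = rng.normal(75, 8, n)                  # independent right-censoring times
obs = np.minimum(x, cen)                    # what is actually recorded
event = x <= cen                            # True if the covariate is observed
y = 1.5 - 0.02 * x + rng.normal(0, 0.1, n)  # continuous outcome

# complete-case analysis: keep only subjects whose covariate is uncensored.
# Unbiased when censoring is independent of the outcome given x, but it
# discards every censored subject, hence the inefficiency.
X = np.column_stack([np.ones(event.sum()), x[event]])
beta = np.linalg.lstsq(X, y[event], rcond=None)[0]   # [intercept, slope]
```

With independent censoring the slope estimate stays near its true value; the methods proposed in the paper aim to recover the efficiency lost by throwing away the censored rows.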

12. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecologically sampled Anopheles arabiensis aquatic habitat covariates

    Directory of Open Access Journals (Sweden)

    Githure John I

    2009-09-01

Background: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting productive habitat clusters, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods: Field and remote-sampled data were collected during July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. The Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction

  13. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    Science.gov (United States)

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, depending on whether enough information is mined, SMCCF should outperform existing NESs or the standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI mean to correct the state estimation, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.

  14. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts, where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed, with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
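The Matérn family referred to here has a closed form for half-integer smoothness; a minimal sketch for ν = 3/2 on irregularly placed points (sizes and parameter values are arbitrary, and this is the kernel only, not the authors' multi-level estimator):

```python
import numpy as np

def matern32(d, sigma2=1.0, rho=1.0):
    """Matérn covariance with smoothness nu = 3/2:
    k(d) = sigma2 * (1 + sqrt(3) d / rho) * exp(-sqrt(3) d / rho)."""
    a = np.sqrt(3.0) * d / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

rng = np.random.default_rng(3)
pts = rng.random((200, 2))                         # irregular observation sites
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
K = matern32(d, sigma2=2.0, rho=0.3)               # dense covariance matrix

# a valid covariance matrix is symmetric positive definite
# (in practice a small nugget is usually added to the diagonal)
evmin = np.linalg.eigvalsh(K).min()
```

Forming and factorising this dense matrix costs O(n³), which is exactly why hierarchical, multi-level constructions are needed once n reaches hundreds of thousands of observations.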

  15. Modifications of Sp(2) covariant superfield quantization

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Moshin, P.Yu

    2003-12-04

    We propose a modification of the Sp(2) covariant superfield quantization to realize a superalgebra of generating operators isomorphic to the massless limit of the corresponding superalgebra of the osp(1,2) covariant formalism. The modified scheme ensures the compatibility of the superalgebra of generating operators with extended BRST symmetry without imposing restrictions that eliminate superfield components from the quantum action. The formalism coincides with the Sp(2) covariant superfield scheme and with the massless limit of the osp(1,2) covariant quantization in particular cases of gauge fixing and of solutions of the quantum master equations.

  16. Bio-Optical Data Assimilation With Observational Error Covariance Derived From an Ensemble of Satellite Images

    Science.gov (United States)

    Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter

    2018-03-01

    An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from the statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images and is used in an Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has higher RMSE than a run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but smaller or comparable RMSE relative to a run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between the observed and model-predicted fraction of diatoms in the total phytoplankton is reduced by a factor of two in comparison to the non-assimilative run.
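    The core Optimal Interpolation update with an ensemble-derived observational error covariance can be sketched as follows. Everything here is a stand-in (random "ensemble", identity background covariance, a selection observation operator), not the MODIS-Aqua data or the authors' system:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 50, 20                       # state size, number of observed points
    H = np.zeros((m, n))
    H[np.arange(m), np.arange(m)] = 1.0 # observation operator: observe first m cells

    # ensemble of satellite images -> observational error covariance R
    ens = rng.standard_normal((100, m)) * 0.3 + 1.0   # stand-in Chl ensemble
    R = np.cov(ens, rowvar=False) + 1e-6 * np.eye(m)

    B = np.eye(n)                       # stand-in forecast (background) error covariance
    xb = np.ones(n)                     # background state
    yo = ens.mean(axis=0) + 0.1         # observations

    # OI analysis: xa = xb + K (yo - H xb),  K = B H^T (H B H^T + R)^(-1)
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.solve(S, np.eye(m))
    xa = xb + K @ (yo - H @ xb)
    ```

    The point of deriving R from the ensemble is that a tighter (more "optimistic") R pulls the analysis harder toward the observations; here the gain K automatically balances B against the ensemble-estimated R.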

  17. Covariant quantizations in plane and curved spaces

    International Nuclear Information System (INIS)

    Assirati, J.L.M.; Gitman, D.M.

    2017-01-01

    We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. First, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We then generalize this construction to covariant quantizations of theories whose configuration spaces are flat but described in arbitrary curvilinear coordinates. Next, we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces, parametrized by the same function ω(θ). Finally, we describe a broader family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The minimal family mentioned above is the special case Θ = 1 of this broader family. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing a quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)

  18. Covariant quantizations in plane and curved spaces

    Energy Technology Data Exchange (ETDEWEB)

    Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)

    2017-07-15

    We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. First, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We then generalize this construction to covariant quantizations of theories whose configuration spaces are flat but described in arbitrary curvilinear coordinates. Next, we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces, parametrized by the same function ω(θ). Finally, we describe a broader family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The minimal family mentioned above is the special case Θ = 1 of this broader family. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing a quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)

  19. A covariant form of the Maxwell's equations in four-dimensional spaces with an arbitrary signature

    International Nuclear Information System (INIS)

    Lukac, I.

    1991-01-01

    The concept of duality in four-dimensional spaces with an arbitrary constant metric is given a strict mathematical formulation. A covariant model for covariant and contravariant bivectors in this space, based on three four-dimensional vectors, is proposed. 14 refs

  20. Construction of covariance matrix for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhang Jianhua

    1992-01-01

    For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data is constructed and discussed. As an example, the covariance matrix of the ²³Na(n,2n) cross section is constructed, and a reasonable result is obtained.
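    The standard construction for indirectly measured data propagates the covariance of the directly measured quantities through the Jacobian of the transformation, C_y = J C_x J^T. A minimal sketch with hypothetical numbers (not the actual ²³Na data) shows how a shared measurement standard correlates all the derived values:

    ```python
    import numpy as np

    # Suppose each derived cross section y_i = r_i * s: a measured ratio r_i
    # times a common standard cross section s. The shared s correlates the y_i.
    r = np.array([1.10, 1.25, 1.40])         # measured ratios (hypothetical)
    s = 0.50                                 # standard cross section (hypothetical)
    cov_r = np.diag([0.02, 0.02, 0.03])**2   # independent ratio errors
    var_s = 0.01**2                          # standard's error (fully correlated part)

    # Jacobian of y = f(r1, r2, r3, s) with respect to (r1, r2, r3, s)
    J = np.hstack([s * np.eye(3), r.reshape(3, 1)])
    C_x = np.block([[cov_r, np.zeros((3, 1))],
                    [np.zeros((1, 3)), np.array([[var_s]])]])
    C_y = J @ C_x @ J.T                      # covariance matrix of the derived data
    ```

    The off-diagonal entries of C_y, e.g. C_y[0,1] = r₁·r₂·var(s), are exactly the correlations an evaluator loses if only error bars, and not the covariance matrix, are reported.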

  1. Modelling anisotropic covariance using stochastic development and sub-Riemannian frame bundle geometry

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Svane, Anne Marie

    2017-01-01

    We discuss the geometric foundation behind the use of stochastic processes in the frame bundle of a smooth manifold to build stochastic models with applications in statistical analysis of non-linear data. The transition densities for the projection to the manifold of Brownian motions developed in the frame bundle lead to a family of probability distributions on the manifold. We explain how data mean and covariance can be interpreted as points in the frame bundle or, more precisely, in the bundle of symmetric positive definite 2-tensors, analogously to the parameters describing Euclidean normal distributions. We discuss a factorization of the frame bundle projection map through this bundle, the natural sub-Riemannian structure of the frame bundle, the effect of holonomy, and the existence of subbundles where the Hörmander condition is satisfied such that the Brownian motions have smooth transition densities.

  2. Analysis of flame propagation phenomenon in simplified stratified charge conditions; Tanjunkasareta sojo kyukiba ni okeru kaen denpa gensho no kansatsu

    Energy Technology Data Exchange (ETDEWEB)

    Moriyoshi, Y; Morikawa, H [Chiba University, Chiba (Japan); Kamimoto, T [Tokyo Institute of Technology, Tokyo (Japan)

    1997-10-01

    Since local inhomogeneity of the mixture concentration inside the cylinder affects the combustion characteristics, basic research on combustion phenomena under stratified charge conditions is required. The authors conducted experiments with a constant-volume chamber, which can simulate an idealized stratified charge field by using a removable partition, to obtain the combustion characteristics. Numerical calculations were also made using several combustion models. As a result, the important feature that combustion is faster under stratified conditions than under homogeneous conditions can be predicted by the two-step reaction model. 4 refs., 8 figs.

  3. Sensitivity of the Geomagnetic Octupole to a Stably Stratified Layer in the Earth's Core

    Science.gov (United States)

    Yan, C.; Stanley, S.

    2017-12-01

    The presence of a stably stratified layer at the top of the core has long been proposed for Earth, based on evidence from seismology and geomagnetic secular variation. Geodynamo modeling offers a unique window onto the properties and dynamics of Earth's core. For example, numerical simulations have shown that magnetic field morphology is sensitive to the presence of stably stratified layers in a planet's core. Here we use the mMoSST numerical dynamo model to investigate the effects of a thin stably stratified layer at the top of Earth's fluid outer core on the resulting large-scale geomagnetic field morphology. We find that the existence of a stable layer has a significant influence on the octupolar component of the magnetic field in our models, whereas the quadrupole shows no obvious trend. This suggests that observations of the geomagnetic field can provide information about the properties of this plausible stable layer, such as how thick and how stable it could be. Furthermore, we examined whether the dominant thermal signature from mantle tomography at the core-mantle boundary (CMB) (a degree and order 2 spherical harmonic) can influence our results, and found that this heat flux pattern at the CMB has no notable effect on the quadrupole and octupole magnetic field components. Our studies suggest that if there is a stably stratified layer at the top of the Earth's core, it must be limited in stability and thickness in order to be compatible with the observed paleomagnetic record.

  4. An integrative model of evolutionary covariance: a symposium on body shape in fishes.

    Science.gov (United States)

    Walker, Jeffrey A

    2010-12-01

    A major direction of current and future biological research is to understand how multiple, interacting functional systems coordinate in producing a body that works. This understanding is complicated by the fact that organisms need to work well in multiple environments, with both predictable and unpredictable environmental perturbations. Furthermore, organismal design reflects a history of past environments and not a plan for future environments. How complex, interacting functional systems evolve, then, is a truly grand challenge. In accepting the challenge, an integrative model of evolutionary covariance is developed. The model combines quantitative genetics, functional morphology/physiology, and functional ecology. The model is used to convene scientists ranging from geneticists, to physiologists, to ecologists, to engineers to facilitate the emergence of body shape in fishes as a model system for understanding how complex, interacting functional systems develop and evolve. Body shape of fish is a complex morphology that (1) results from many developmental paths and (2) functions in many different behaviors. Understanding the coordination and evolution of the many paths from genes to body shape, body shape to function, and function to a working fish body in a dynamic environment is now possible given new technologies from genetics to engineering and new theoretical models that integrate the different levels of biological organization (from genes to ecology).

  5. Are Low-order Covariance Estimates Useful in Error Analyses?

    Science.gov (United States)

    Baker, D. F.; Schimel, D.

    2005-12-01

    Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both the state vector and the covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining only an approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least-squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb-Shanno method).
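    The contrast drawn here, between the full-rank posterior covariance of a batch least-squares inversion and a low-order approximation, can be sketched with a random stand-in transport operator (not the authors' system). In this toy Gaussian setting, truncating the Hessian to its leading eigenpairs inflates the total posterior variance:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 30, 60                      # number of fluxes, number of observations
    H = rng.standard_normal((m, n))    # stand-in transport (observation) operator
    B = np.eye(n)                      # prior flux error covariance
    Rinv = np.eye(m)                   # observation error precision

    # Full-rank posterior covariance from the batch least-squares inversion
    A = np.linalg.inv(H.T @ Rinv @ H + np.linalg.inv(B))

    # Low-order approximation: keep only the k leading eigenpairs of the Hessian,
    # as a variational scheme with limited iterations effectively does
    w, V = np.linalg.eigh(H.T @ Rinv @ H)
    k = 5
    Hk = (V[:, -k:] * w[-k:]) @ V[:, -k:].T
    A_k = np.linalg.inv(Hk + np.linalg.inv(B))

    # Total posterior uncertainty (trace) is overestimated by the low-order form
    tr_full, tr_low = np.trace(A), np.trace(A_k)
    ```

    Because the truncated Hessian is dominated by the full one in the positive semidefinite order, A_k ⪰ A, so error analyses based on the low-order covariance are conservative in total variance but can misstate the structure of the uncertainty.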

  6. Genome-Wide Scan for Adaptive Divergence and Association with Population-Specific Covariates.

    Science.gov (United States)

    Gautier, Mathieu

    2015-12-01

    In population genomics studies, accounting for the neutral covariance structure across population allele frequencies is critical to improve the robustness of genome-wide scan approaches. Elaborating on the BayEnv model, this study investigates several modeling extensions (i) to improve the estimation accuracy of the population covariance matrix and all the related measures, (ii) to identify significantly overly differentiated SNPs based on a calibration procedure of the XtX statistics, and (iii) to consider alternative covariate models for analyses of association with population-specific covariables. In particular, the auxiliary variable model allows one to deal with multiple testing issues and, provided the relative marker positions are available, to capture some linkage disequilibrium information. A comprehensive simulation study was carried out to evaluate the performances of these different models. Also, when compared in terms of power, robustness, and computational efficiency to five other state-of-the-art genome-scan methods (BayEnv2, BayScEnv, BayScan, flk, and lfmm), the proposed approaches proved highly effective. For illustration purposes, genotyping data on 18 French cattle breeds were analyzed, leading to the identification of 13 strong signatures of selection. Among these, four (surrounding the KITLG, KIT, EDN3, and ALB genes) contained SNPs strongly associated with the piebald coloration pattern, while a fifth (surrounding PLAG1) could be associated with morphological differences across the populations. Finally, analysis of Pool-Seq data from 12 populations of Littorina saxatilis living in two different ecotypes illustrates how the proposed framework might help in addressing relevant ecological issues in nonmodel species. Overall, the proposed methods define a robust Bayesian framework to characterize adaptive genetic differentiation across populations. The BayPass program implementing the different models is available at http://www1.montpellier
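    The neutral covariance structure at the heart of such scans can be illustrated with a schematic sketch: estimate a population-by-population covariance matrix Ω from standardized allele frequencies and compute an XtX-like differentiation statistic per SNP. This is a moment-based stand-in, not the hierarchical Bayesian model of BayPass, and the data are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_pops, n_snps = 12, 5000

    # stand-in allele frequencies around a common 'ancestral' frequency
    anc = rng.uniform(0.05, 0.95, n_snps)
    freqs = np.clip(anc + rng.standard_normal((n_pops, n_snps)) * 0.02, 0.01, 0.99)

    # standardize each SNP by sqrt(p(1-p)) of its across-population mean, then
    # estimate the population-by-population covariance (the 'Omega' of BayEnv/BayPass)
    p_bar = freqs.mean(axis=0)
    Z = (freqs - p_bar) / np.sqrt(p_bar * (1.0 - p_bar))
    Omega = Z @ Z.T / n_snps

    # XtX-like differentiation statistic per SNP; centering on the observed mean
    # makes Omega rank-deficient, so this sketch uses the pseudo-inverse
    # (BayPass instead estimates the ancestral frequencies within the model)
    Oinv = np.linalg.pinv(Omega)
    xtx = np.einsum('ps,pq,qs->s', Z, Oinv, Z)
    ```

    Whitening by Ω is what separates genuinely overly differentiated SNPs from differentiation explained by shared population history; the calibration procedure in the paper then turns the XtX values into decision thresholds.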

  7. Associations of Bcl-2 rs956572 genotype groups in the structural covariance network in early-stage Alzheimer's disease.

    Science.gov (United States)

    Chang, Chiung-Chih; Chang, Ya-Ting; Huang, Chi-Wei; Tsai, Shih-Jen; Hsu, Shih-Wei; Huang, Shu-Hua; Lee, Chen-Chang; Chang, Wen-Neng; Lui, Chun-Chung; Lien, Chia-Yi

    2018-02-08

    Alzheimer's disease (AD) is a complex neurodegenerative disease, and genetic differences may mediate neuronal degeneration. In humans, a single-nucleotide polymorphism in the B-cell chronic lymphocytic leukemia/lymphoma-2 (Bcl-2) gene, rs956572, has been found to significantly modulate Bcl-2 protein expression in the brain. The Bcl-2 AA genotype has been associated with reduced Bcl-2 levels and lower gray matter volume in healthy populations. We hypothesized that different Bcl-2 genotype groups may modulate large-scale brain networks that determine neurobehavioral test scores. Gray matter structural covariance networks (SCNs) were constructed in 104 patients with AD using T1-weighted magnetic resonance imaging with seed-based correlation analysis. The patients were stratified into two genotype groups on the basis of Bcl-2 expression (G carriers, n = 76; A homozygotes, n = 28). Four SCNs characteristic of AD were constructed from seeds in the default mode network, salience network, and executive control network, and cognitive test scores served as the major outcome factor. For the G carriers, influences of the SCNs were observed mostly in the default mode network, of which the peak clusters anchored by the posterior cingulate cortex seed determined the cognitive test scores. In contrast, genetic influences in the A homozygotes were found mainly in the executive control network, and both the dorsolateral prefrontal cortex seed and the interconnected peak clusters were correlated with the clinical scores. Despite a small number of cases, the A homozygotes showed greater covariance strength than the G carriers among all four SCNs. Our results suggest that the Bcl-2 rs956572 polymorphism is associated with different strengths of structural covariance in AD that determine clinical outcomes. The greater covariance strength in the four SCNs shown in the A homozygotes suggests that different Bcl-2 polymorphisms play different modulatory roles.

  8. The effect of sediments on turbulent plume dynamics in a stratified fluid

    Science.gov (United States)

    Stenberg, Erik; Ezhova, Ekaterina; Brandt, Luca

    2017-11-01

    We report large eddy simulation results of sediment-loaded turbulent plumes in a stratified fluid. The configuration, where the plume is discharged from a round source, provides an idealized model of subglacial discharge from a submarine tidewater glacier and is a starting point for understanding the effect of sediments on the dynamics of the rising plume. The transport of sediments is modeled by means of an advection-diffusion equation where sediment settling velocity is taken into account. We initially follow the experimental setup of Sutherland (Phys. Rev. Fluids, 2016), considering uniformly stratified ambients and further extend the work to pycnocline-type stratifications typical of Greenland fjords. Apart from examining the rise height, radial spread and intrusion of the rising plume, we gain further insights of the plume dynamics by extracting turbulent characteristics and the distribution of the sediments inside the plume.

  9. Cosmology of a covariant Galilean field.

    Science.gov (United States)

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  10. The covariant chiral ring

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)

    2016-03-23

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T⁴, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  11. GLq(N)-covariant quantum algebras and covariant differential calculus

    International Nuclear Information System (INIS)

    Isaev, A.P.; Pyatov, P.N.

    1992-01-01

    GLq(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs

  12. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits their applicability. The method proposed here provides fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  13. On an extension of covariance

    International Nuclear Information System (INIS)

    Sebestyen, A.

    1975-07-01

    The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given, and it is shown how internal forces arise in such systems. Equations for the internal fields are derived. Through a group-theoretical interpretation of the generalized coordinates, it is shown how particles in the ordinary sense enter the model; as a simple application, the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)

  14. Ethanol dehydration to ethylene in a stratified autothermal millisecond reactor.

    Science.gov (United States)

    Skinner, Michael J; Michor, Edward L; Fan, Wei; Tsapatsis, Michael; Bhan, Aditya; Schmidt, Lanny D

    2011-08-22

    The concurrent decomposition and deoxygenation of ethanol was accomplished in a stratified reactor with 50-80 ms contact times. The stratified reactor comprised an upstream oxidation zone that contained Pt-coated Al₂O₃ beads and a downstream dehydration zone consisting of H-ZSM-5 zeolite films deposited on Al₂O₃ monoliths. Ethanol conversion, product selectivity, and reactor temperature profiles were measured for a range of fuel:oxygen ratios for two autothermal reactor configurations using two different sacrificial fuel mixtures: a parallel hydrogen-ethanol feed system and a series methane-ethanol feed system. Increasing the amount of oxygen relative to the fuel resulted in a monotonic increase in ethanol conversion in both reaction zones. The majority of the converted carbon was in the form of ethylene, where the ethanol carbon-carbon bonds stayed intact while the oxygen was removed. Over 90% yield of ethylene was achieved by using methane as a sacrificial fuel. These results demonstrate that noble metals can be successfully paired with zeolites to create a stratified autothermal reactor capable of removing oxygen from biomass model compounds in a compact, continuous flow system that can be configured to have multiple feed inputs, depending on process restrictions. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Modeling the Thickness of Perennial Ice Covers on Stratified Lakes of the Taylor Valley, Antarctica

    Science.gov (United States)

    Obryk, M. K.; Doran, P. T.; Hicks, J. A.; McKay, C. P.; Priscu, J. C.

    2016-01-01

    A one-dimensional ice cover model was developed to predict and constrain drivers of long-term ice thickness trends in chemically stratified lakes of Taylor Valley, Antarctica. The model is driven by surface radiative heat fluxes and heat fluxes from the underlying water column. The model successfully reproduced 16 years (between 1996 and 2012) of ice thickness changes for the west lobe of Lake Bonney (average ice thickness = 3.53 m; RMSE = 0.09 m, n = 118) and Lake Fryxell (average ice thickness = 4.22 m; RMSE = 0.21 m, n = 128). Long-term ice thickness trends require coupling with the thermal structure of the water column. The heat stored within the temperature maximum of lakes exceeding a liquid water column depth of 20 m can either impede or facilitate ice thickness change depending on the predominant climatic trend (temperature cooling or warming). As such, shallow perennially ice-covered lakes (water columns < 20 m deep) without deep temperature maxima are more sensitive indicators of climate change. The long-term ice thickness trends are a result of the surface energy flux and the heat flux from the deep temperature maximum in the water column, the latter of which results from absorbed solar radiation.
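    The kind of flux balance such a model integrates can be sketched with a Stefan-type toy model in which basal growth is set by conduction through the ice opposed by the water-column heat flux. The physical constants are standard SI values, but the forcing numbers are made up and this is not the authors' model:

    ```python
    RHO_I, L_F, K_I = 917.0, 3.34e5, 2.1   # ice density, latent heat, thermal conductivity (SI)

    def step_ice(H, T_surf, F_water, dt=86400.0):
        """One explicit daily step of a 1-D ice-cover thickness model.

        Basal growth is driven by conduction through the ice,
        F_cond = K_I * (T_melt - T_surf) / H, opposed by the heat
        flux F_water delivered from the water column below.
        """
        F_cond = K_I * (0.0 - T_surf) / H           # W m^-2, positive for a cold surface
        dHdt = (F_cond - F_water) / (RHO_I * L_F)   # m s^-1 (Stefan-type balance)
        return H + dHdt * dt

    # cold surface, weak water-column heat flux -> the ice cover slowly thickens
    H = 3.5
    for _ in range(365):
        H = step_ice(H, T_surf=-20.0, F_water=1.0)
    ```

    The balance makes the paper's point in miniature: a larger heat flux from a deep temperature maximum (F_water) thins the equilibrium ice cover, while conduction under a cold surface thickens it, so thin covers respond most directly to the surface forcing.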

  16. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
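    A minimal version of this idea, replacing an abrupt change point with a hyperbola-smoothed hinge and fitting by nonlinear least squares, might look like the following. The data are synthetic and this parametrization is one common choice, not necessarily the authors' exact form:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyper(x, c, gamma):
        """Smoothed hinge: one branch of a hyperbola with asymptotes 0 and (x - c).

        gamma controls the curvature of the transition around the join point c;
        as gamma -> 0 this tends to the abrupt broken-line hinge max(x - c, 0).
        """
        return 0.5 * ((x - c) + np.sqrt((x - c) ** 2 + gamma ** 2))

    def model(x, b0, b1, b2, c, gamma):
        # two linear phases (slopes b1 and b1 + b2) smoothly joined near c
        return b0 + b1 * x + b2 * hyper(x, c, gamma)

    rng = np.random.default_rng(4)
    x = np.linspace(0, 10, 200)
    y = model(x, 1.0, 0.5, -0.8, 5.0, 0.5) + rng.normal(0, 0.05, x.size)

    popt, _ = curve_fit(model, x, y, p0=[0.0, 1.0, -1.0, 4.0, 1.0])
    ```

    Adding further hyperbolic covariates hyper(x, c_k, gamma_k), one per additional join point, extends this to more than two linear segments, which is exactly the extension the abstract describes.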

  17. Transition of Gas-Liquid Stratified Flow in Oil Transport Pipes

    Directory of Open Access Journals (Sweden)

    D. Lakehal

    2011-12-01

    Large-scale simulation results are presented for the transition of a gas-liquid stratified flow to the slug flow regime in circular 3D oil transport pipes under turbulent flow conditions. Free-surface flow in the pipe is treated using the Level Set method. Turbulence is approached via the LES and VLES methodologies extended to interfacial two-phase flows. It is shown that only with the Level Set method can the flow transition be accurately predicted, better than with the two-fluid phase-average model. The transition from stratified to slug flow is found to follow the merging of the secondary wave modes created by the action of gas shear (short waves) with the first wave mode (a high-amplitude long wave). The model is capable of predicting global flow features such as the onset of slugging and the slug speed. In the second test case, the model predicts different kinds of slugs: the so-called operating slugs formed upstream, which fill the pipe entirely with water slugs of length scales of the order of 2-4 D, and smaller (1-1.5 D) disturbance slugs featuring lower hold-up (0.8-0.9). The model predicts the frequency of slugs well. The simulations revealed important parameter effects on the results, such as two-dimensionality, pipe length, and water holdup.

  18. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
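    The quantity this paper avoids computing directly can be illustrated on a toy univariate model: build Henderson's mixed-model equations and read the prediction error variance-covariance (PEV) matrix of the random effects off the inverse coefficient matrix. Inverting the full coefficient matrix, done here directly, is exactly what becomes infeasible at scale and motivates the paper's fixed-effects-only correction:

    ```python
    import numpy as np

    # Toy model y = Xb + Zu + e: one fixed effect per contemporary group,
    # iid random effects, variance ratio lambda = sigma2_e / sigma2_u.
    n_grp, n_anim = 3, 12
    grp = np.repeat(np.arange(n_grp), n_anim // n_grp)
    X = np.eye(n_grp)[grp]                   # contemporary-group incidence matrix
    Z = np.eye(n_anim)                       # one record per animal
    lam = 2.0

    # Henderson's mixed-model equations; the u-block of the inverse coefficient
    # matrix, times sigma2_e, is the PEV matrix of the predicted random effects.
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * np.eye(n_anim)]])
    Cinv = np.linalg.inv(C)
    PEV = Cinv[n_grp:, n_grp:]               # in units of sigma2_e
    ```

    Connectedness measures are then functions of this matrix, e.g. the PEV of a difference between animals i and j is PEV[i,i] + PEV[j,j] - 2*PEV[i,j]; since prediction cannot be worse than the prior, each diagonal entry is bounded by sigma2_u/sigma2_e = 1/lambda.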

  19. Examination of various roles for covariance matrices in the development, evaluation, and application of nuclear data

    International Nuclear Information System (INIS)

    Smith, D.L.

    1988-01-01

    The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs

  20. A 3-D Riesz-Covariance Texture Model for Prediction of Nodule Recurrence in Lung CT

    OpenAIRE

    Cirujeda Pol; Dicente Cid Yashin; Müller Henning; Rubin Daniel L.; Aguilera Todd A.; Loo Jr. Billy W.; Diehn Maximilian; Binefa Xavier; Depeursinge Adrien

    2016-01-01

    This paper proposes a novel imaging biomarker of lung cancer relapse derived from 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. When compared to the classical use of the average for feature aggregation, feature covariances preserve sp...

  1. Nuclear data covariances in the Indian context

    International Nuclear Information System (INIS)

    Ganesan, S.

    2014-01-01

    The topic of covariances has been recognized as an important part of several ongoing nuclear data science activities of the Nuclear Data Physics Centre of India (NDPCI) since 2007. A Phase-1 project on nuclear data covariances, in collaboration with the Statistics Department of Manipal University, Karnataka (Prof. K.M. Prasad and Prof. S. Nair), was executed successfully during the 2007-2011 period. In Phase-1, the NDPCI conducted three national theme meetings on nuclear data covariances, sponsored by the DAE-BRNS, in 2008, 2010 and 2013. The emphasis in Phase-1 was on a thorough basic understanding of the concept of covariances, including the assignment of uncertainties to experimental data in terms of partial errors and micro-correlations, through a study and detailed discussion of the open literature. Towards the end of Phase-1, measurements and a first-time covariance analysis of cross-sections for the 58Ni(n,p)58Co reaction, measured at the Mumbai Pelletron accelerator using the 7Li(p,n) reaction as a neutron source in the MeV energy region, were performed under a PhD programme on nuclear data covariances in which two students, Shri B.S. Shivashankar and Ms. Shanti Sheela, are enrolled. India is also successfully evolving a team of young researchers to encode nuclear data uncertainties, with the perspective of covariances, in the IAEA-EXFOR format. A Phase-2 DAE-BRNS-NDPCI project proposal at Manipal has been submitted and is undergoing peer review at this time. In Phase-2, modern nuclear data evaluation techniques that include covariances, such as the Kalman filter, will be studied further as a first-time research and development effort. Presently, a 48-hour lecture series on the treatment of errors and their propagation is being formulated under the auspices of the Homi Bhabha National Institute. The talk describes the progress achieved thus far in the learning curve of the above-mentioned and exciting

  2. The generally covariant locality principle - a new paradigm for local quantum field theory

    International Nuclear Information System (INIS)

    Brunetti, R.; Fredenhagen, K.; Verch, R.

    2002-05-01

    A new approach to the model-independent description of quantum field theories will be introduced in the present work. The main feature of this new approach is to incorporate in a local sense the principle of general covariance of general relativity, thus giving rise to the concept of a locally covariant quantum field theory. Such locally covariant quantum field theories will be described mathematically in terms of covariant functors between the categories, on one side, of globally hyperbolic spacetimes with isometric embeddings as morphisms and, on the other side, of *-algebras with unital injective *-endomorphisms as morphisms. Moreover, locally covariant quantum fields can be described in this framework as natural transformations between certain functors. The usual Haag-Kastler framework of nets of operator algebras over a fixed spacetime background manifold, together with covariant automorphic actions of the isometry group of the background spacetime, can be regained from this new approach as a special case. Examples of this new approach are also outlined. In case a locally covariant quantum field theory obeys the time-slice axiom, one can naturally associate to it certain automorphic actions, called ''relative Cauchy-evolutions'', which describe the dynamical reaction of the quantum field theory to a local change of spacetime background metrics. The functional derivative of a relative Cauchy-evolution with respect to the spacetime metric is found to be a divergence-free quantity which has, as will be demonstrated in an example, the significance of an energy-momentum tensor for the locally covariant quantum field theory. Furthermore, we discuss the functorial properties of state spaces of locally covariant quantum field theories that entail the validity of the principle of local definiteness. (orig.)

  3. AFCI-2.0 Neutron Cross Section Covariance Library

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.

    2011-03-01

    The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity - for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. The responsibility of BNL was defined as developing covariances for structural materials and fission products, managing the library, and coordinating the work; the responsibility of LANL was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the central values of ENDF/B-VII.0, the latest official release of the US evaluated neutron cross section files. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural

  4. AFCI-2.0 Neutron Cross Section Covariance Library

    International Nuclear Information System (INIS)

    Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.

    2011-01-01

    The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity - for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. The responsibility of BNL was defined as developing covariances for structural materials and fission products, managing the library, and coordinating the work; the responsibility of LANL was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over the period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the central values of ENDF/B-VII.0, the latest official release of the US evaluated neutron cross section files. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78

  5. Experimental and numerical investigation of stratified gas-liquid flow in inclined circular pipes

    International Nuclear Information System (INIS)

    Faccini, J.L.H.; Sampaio, P.A.B. de; Botelho, M.H.D.S.; Cunha, M.V.; Cunha Filho, J.S.; Su, J.

    2012-01-01

    In this paper, a stratified gas-liquid flow is experimentally and numerically investigated. Two measurement techniques, namely an ultrasonic technique and a visualization technique, are applied to an inclined circular test section using a fast single-transducer pulse-echo method and a high-speed camera. A numerical model is employed to simulate the stratified gas-liquid flow, based on a system of non-linear differential equations consisting of the Reynolds-averaged Navier-Stokes equations with the κ-ω turbulence model. The test section used in this work comprises mainly a transparent circular pipe with an inner diameter of 1 inch and inclination angles varying from -2.5 to -10.0 degrees. Numerical solutions are obtained for the liquid height as a function of the inclination angle and compared with our own experimental data. (author)

  6. Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation

    Science.gov (United States)

    Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.

    2018-01-01

    Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
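The two modifications compared above can be sketched in a few lines. The function names, the target condition number, and the Gaussian taper below are our illustrative choices, not the authors' code:

```python
import numpy as np

# (1) "Reconditioning": raise the smallest eigenvalues of a symmetric matrix
# until its condition number meets a target; correlation structure is kept,
# but sampling noise is not removed.
def recondition(C, target_cond):
    w, V = np.linalg.eigh(C)
    floor = w.max() / target_cond          # eigenvalue floor implied by target
    return V @ np.diag(np.maximum(w, floor)) @ V.T

# (2) Localization via the Schur (elementwise) product: taper spurious
# long-range sample correlations; may also dampen small true cross-signals.
def localize(C, L):
    return C * L

rng = np.random.default_rng(0)
sample = rng.standard_normal((5, 40))      # 5 ensemble members, 40 state variables
B = np.cov(sample.T)                       # rank-deficient sample covariance
Br = recondition(B, target_cond=100.0)

dist = np.abs(np.subtract.outer(np.arange(40), np.arange(40)))
Bl = localize(B, np.exp(-((dist / 10.0) ** 2)))   # Gaussian taper, unit diagonal
```

With only 5 members the raw 40 x 40 sample covariance is singular; reconditioning caps its condition number at the target, while the taper leaves variances (the diagonal) untouched.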

  7. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. 
RF tended to over
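The model-averaging step (6) above can be sketched with one simple weighting choice: weights inversely proportional to each method's validation mean squared error. This is an illustrative scheme with invented data; the study's exact weighting may differ:

```python
import numpy as np

# Weighted model average of predictions from several DSM methods, weighting
# each method by the inverse of its validation MSE (illustrative choice).
def model_average(preds, y_val, val_preds):
    """preds / val_preds: dicts of method name -> prediction array."""
    mse = {m: np.mean((y_val - p) ** 2) for m, p in val_preds.items()}
    w = {m: 1.0 / e for m, e in mse.items()}
    tot = sum(w.values())
    return sum(w[m] / tot * preds[m] for m in preds)

# Invented example: "rf" is far more accurate on validation data than "lasso",
# so the average leans strongly toward the "rf" predictions.
y_val = np.array([1.0, 2.0, 3.0])
val_preds = {"rf": y_val + 0.1, "lasso": y_val + 1.0}
preds = {"rf": np.array([5.0, 5.0]), "lasso": np.array([7.0, 7.0])}
avg = model_average(preds, y_val, val_preds)
```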

  8. An analysis of direct-contact condensation in horizontal cocurrent stratified flow of steam and cold water

    International Nuclear Information System (INIS)

    Lee, Suk Ho; Kim, Hho Jung

    1992-01-01

    The physical benchmark problem of direct-contact condensation in horizontal cocurrent stratified flow was analyzed using the RELAP5/MOD2 and /MOD3 one-dimensional models. The analysis was performed for the Northwestern experiments, which involved condensing steam/water flow in a rectangular channel. The study showed that the RELAP5 interfacial heat transfer model, under the horizontal stratified flow regime, predicted the condensation rate well even though the interfacial heat transfer area was underpredicted. However, some discrepancies in water layer thickness and local heat transfer coefficient relative to the experimental results were found, especially in the presence of a wavy interface, and agreement was obtained only within a limited range. (Author)

  9. Experience in using the covariances of some ENDF/B-V dosimetry cross sections: proposed improvements and addition of cross-reaction covariances

    International Nuclear Information System (INIS)

    Fu, C.Y.; Hetrick, D.M.

    1982-01-01

    Recent ratio data, with carefully evaluated covariances, were combined with eleven of the ENDF/B-V dosimetry cross sections using the generalized least-squares method. The purpose was to improve these evaluated cross sections and covariances, as well as to generate values for the cross-reaction covariances. The results represent improved cross sections as well as realistic and usable covariances. The latter are necessary for meaningful integral-differential comparisons and for spectrum unfolding.
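The generalized least-squares combination step has a standard closed form: a prior evaluation x0 with covariance P0 is updated by new data y = Gx + error with covariance V. The sketch below is a textbook version with invented numbers, not the authors' code:

```python
import numpy as np

# Generic GLS update of an evaluated vector x0 (prior covariance P0) with new
# data y = G x + error (covariance V). Returns the updated values and the
# updated (always smaller-or-equal) covariance.
def gls_update(x0, P0, G, y, V):
    S = G @ P0 @ G.T + V                   # covariance of the residual y - G x0
    K = P0 @ G.T @ np.linalg.inv(S)        # gain matrix
    x1 = x0 + K @ (y - G @ x0)
    P1 = P0 - K @ G @ P0
    return x1, P1

# Invented example: remeasure the first of two cross-section values with the
# same uncertainty as the prior, halving its variance.
x0 = np.array([1.0, 2.0])
P0 = np.diag([0.04, 0.09])
G = np.array([[1.0, 0.0]])
x1, P1 = gls_update(x0, P0, G, np.array([1.2]), np.array([[0.04]]))
```

When the combined data sets are correlated, the off-diagonal blocks of V and P0 carry that information, which is what makes the cross-reaction covariances in the updated P1 nonzero in general.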

  10. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  11. FDTD scattered field formulation for scatterers in stratified dispersive media.

    Science.gov (United States)

    Olkkonen, Juuso

    2010-03-01

    We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.

  12. Schroedinger covariance states in anisotropic waveguides

    International Nuclear Information System (INIS)

    Angelow, A.; Trifonov, D.

    1995-03-01

    In this paper squeezed and covariance states based on the Schroedinger inequality, and their connection with other nonclassical states, are considered for the particular case of an anisotropic waveguide in LiNiO3. Here, the problem of photon creation and generation of squeezed and Schroedinger covariance states in optical waveguides is solved in two steps: 1. Quantization of the electromagnetic field is carried out in the presence of the dielectric waveguide using a normal-mode expansion. The photon creation and annihilation operators are introduced by expanding the solution A⃗(r⃗,t) in a series in terms of the Sturm-Liouville mode functions. 2. In terms of these operators, the Hamiltonian of the field in a nonlinear waveguide is derived. For this Hamiltonian we construct the covariance states as stable states (with nonzero covariance) which minimize the Schroedinger uncertainty relation. The evolutions of the three second momenta of q̂_j and p̂_j are calculated. For this Hamiltonian all three momenta are expressed in terms of one real parameter s only. It is found how the covariance, via this parameter s, depends on the waveguide profile n(x,y), on the mode distributions u⃗_j(x,y), and on the waveguide phase mismatching Δβ. (author). 37 refs

  13. Form of the manifestly covariant Lagrangian

    Science.gov (United States)

    Johns, Oliver Davis

    1985-10-01

    The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a ``homogeneous'' Lagrangian and others use a ``modified'' form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the ``homogeneous'' form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the ``homogeneous'' Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the ``homogeneous'' Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.

  14. Determination of covariant Schwinger terms in anomalous gauge theories

    International Nuclear Information System (INIS)

    Kelnhofer, G.

    1991-01-01

    A functional integral method is used to determine equal-time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivations of consistent and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product, the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as a connection on a certain line bundle over all gauge potentials. Finally, the covariant commutator anomalies are calculated for the two- and four-dimensional cases. (orig.)

  15. Cross-population myelination covariance of human cerebral cortex.

    Science.gov (United States)

    Ma, Zhiwei; Zhang, Nanyin

    2017-09-01

    Cross-population covariance of brain morphometric quantities provides a measure of interareal connectivity, as it is believed to be determined by the coordinated neurodevelopment of connected brain regions. Although useful, structural covariance analysis predominantly employed bulky morphological measures with mixed compartments, whereas studies of the structural covariance of any specific subdivisions such as myelin are rare. Characterizing myelination covariance is of interest, as it will reveal connectivity patterns determined by coordinated development of myeloarchitecture between brain regions. Using myelin content MRI maps from the Human Connectome Project, here we showed that the cortical myelination covariance was highly reproducible, and exhibited a brain organization similar to that previously revealed by other connectivity measures. Additionally, the myelination covariance network shared common topological features of human brain networks such as small-worldness. Furthermore, we found that the correlation between myelination covariance and resting-state functional connectivity (RSFC) was uniform within each resting-state network (RSN), but could considerably vary across RSNs. Interestingly, this myelination covariance-RSFC correlation was appreciably stronger in sensory and motor networks than cognitive and polymodal association networks, possibly due to their different circuitry structures. This study has established a new brain connectivity measure specifically related to axons, and this measure can be valuable to investigating coordinated myeloarchitecture development. Hum Brain Mapp 38:4730-4743, 2017. © 2017 Wiley Periodicals, Inc.

  16. Exploring the role of wave drag in the stably stratified oceanic and atmospheric bottom boundary layer in the CNRS-Toulouse (CNRM-GAME) large stratified water flume

    NARCIS (Netherlands)

    Kleczek, M.; Steeneveld, G.J.; Paci, A.; Calmer, R.; Belleudy, A.; Canonici, J.C.; Murguet, F.; Valette, V.

    2014-01-01

    This paper reports on a laboratory experiment in the CNRM-GAME (Toulouse) stratified water flume of a stably stratified boundary layer, in order to quantify the momentum transfer due to orographically induced gravity waves by gently undulating hills in a boundary layer flow. In a stratified fluid, a

  17. Covariant extensions and the nonsymmetric unified field

    International Nuclear Information System (INIS)

    Borchsenius, K.

    1976-01-01

    The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author)

  18. Computing more proper covariances of energy dependent nuclear data

    International Nuclear Information System (INIS)

    Vanhanen, R.

    2016-01-01

    Highlights: • We present conditions for covariances of energy dependent nuclear data to be proper. • We provide methods to detect non-positive and inconsistent covariances in ENDF-6 format. • We propose methods to find nearby more proper covariances. • The methods can be used as a part of a quality assurance program. - Abstract: We present conditions for covariances of energy dependent nuclear data to be proper in the sense that the covariances are positive, i.e., its eigenvalues are non-negative, and consistent with respect to the sum rules of nuclear data. For the ENDF-6 format covariances we present methods to detect non-positive and inconsistent covariances. These methods would be useful as a part of a quality assurance program. We also propose methods that can be used to find nearby more proper energy dependent covariances. These methods can be used to remove unphysical components, while preserving most of the physical components. We consider several different senses in which the nearness can be measured. These methods could be useful if a re-evaluation of improper covariances is not feasible. Two practical examples are processed and analyzed. These demonstrate some of the properties of the methods. We also demonstrate that the ENDF-6 format covariances of linearly dependent nuclear data should usually be encoded with the derivation rules.
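The detection and "nearby repair" ideas described above can be illustrated with a minimal eigenvalue-based sketch. The naming is ours, and clipping negative eigenvalues to zero is only one of the several senses of nearness the paper considers:

```python
import numpy as np

# Detect a non-positive (improper) covariance matrix via its eigenvalues.
def is_positive(C, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(C) >= -tol))

# Repair by clipping negative eigenvalues to zero: the nearest positive
# semidefinite matrix in the Frobenius sense, preserving the eigenvectors.
def clip_to_psd(C):
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

# An "improper" correlation matrix: its smallest eigenvalue is negative.
bad = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.9],
                [0.0, 0.9, 1.0]])
fixed = clip_to_psd(bad)
```

A full quality-assurance pass would also check the consistency (sum-rule) conditions the paper describes; the eigenvalue test above only covers positivity.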

  19. Covariance Spectroscopy for Fissile Material Detection

    International Nuclear Information System (INIS)

    Trainham, Rusty; Tinsley, Jim; Hurley, Paul; Keegan, Ray

    2009-01-01

    Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of these radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations can also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful for producing distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams, because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.

  20. Non-Critical Covariant Superstrings

    CERN Document Server

    Grassi, P A

    2005-01-01

    We construct a covariant description of non-critical superstrings in even dimensions. We construct explicitly supersymmetric hybrid type variables in a linear dilaton background, and study an underlying N=2 twisted superconformal algebra structure. We find similarities between non-critical superstrings in 2n+2 dimensions and critical superstrings compactified on CY_(4-n) manifolds. We study the spectrum of the non-critical strings, and in particular the Ramond-Ramond massless fields. We use the supersymmetric variables to construct the non-critical superstrings sigma-model action in curved target space backgrounds with coupling to the Ramond-Ramond fields. We consider as an example non-critical type IIA strings on AdS_2 background with Ramond-Ramond 2-form flux.

  1. Covariate Imbalance and Precision in Measuring Treatment Effects

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  2. Covariant description of Hamiltonian form for field dynamics

    International Nuclear Information System (INIS)

    Ozaki, Hiroshi

    2005-01-01

    Hamiltonian form of field dynamics is developed on a space-like hypersurface in space-time. A covariant Poisson bracket on the space-like hypersurface is defined and it plays a key role to describe every algebraic relation into a covariant form. It is shown that the Poisson bracket has the same symplectic structure that was brought in the covariant symplectic approach. An identity invariant under the canonical transformations is obtained. The identity follows a canonical equation in which the interaction Hamiltonian density generates a deformation of the space-like hypersurface. The equation just corresponds to the Yang-Feldman equation in the Heisenberg pictures in quantum field theory. By converting the covariant Poisson bracket on the space-like hypersurface to four-dimensional commutator, we can pass over to quantum field theory in the Heisenberg picture without spoiling the explicit relativistic covariance. As an example the canonical QCD is displayed in a covariant way on a space-like hypersurface

  3. Dimension from covariance matrices.

    Science.gov (United States)

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
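A simplified version of this eigenvalue comparison can be sketched as follows. It uses a parallel-analysis-style threshold (the elementwise maximum over Gaussian surrogate eigenvalues) rather than the authors' full statistical test, and the delay lag is our choice:

```python
import numpy as np

# Delay-embed a scalar time series into `dim` columns separated by `lag`.
def embed(x, dim, lag=1):
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Count eigenvalues of the embedded signal's correlation matrix that exceed
# the largest eigenvalues seen for same-size Gaussian random surrogates.
def estimate_dim(x, max_dim=8, lag=1, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    E = embed(x, max_dim, lag)
    w = np.sort(np.linalg.eigvalsh(np.corrcoef(E.T)))[::-1]
    surr = [np.sort(np.linalg.eigvalsh(np.corrcoef(
            rng.standard_normal(E.shape).T)))[::-1] for _ in range(trials)]
    thresh = np.max(surr, axis=0)          # elementwise surrogate maxima
    return int(np.sum(w > thresh))

# A noise-free sinusoid traces out a 2-D structure in embedding space.
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t)
```

Using correlation matrices on both sides removes the scale mismatch between the signal and the unit-variance surrogates; the original method adds a probability that the dimension estimate is valid, which this sketch omits.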

  4. ANL Critical Assembly Covariance Matrix Generation - Addendum

    Energy Technology Data Exchange (ETDEWEB)

    McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-13

    In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.

  5. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    Science.gov (United States)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.

  6. Diagnostic accuracy of the STRATIFY clinical prediction rule for falls: A systematic review and meta-analysis

    LENUS (Irish Health Repository)

    Billington, Jennifer

    2012-08-07

Background: The STRATIFY score is a clinical prediction rule (CPR) derived to assist clinicians to identify patients at risk of falling. The purpose of this systematic review and meta-analysis is to determine the overall diagnostic accuracy of the STRATIFY rule across a variety of clinical settings. Methods: A literature search was performed to identify all studies that validated the STRATIFY rule. The methodological quality of the studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. A STRATIFY score of ≥2 points was used to identify individuals at higher risk of falling. All included studies were combined using a bivariate random effects model to generate pooled sensitivity and specificity of STRATIFY at ≥2 points. Heterogeneity was assessed using the variance of logit transformed sensitivity and specificity. Results: Seventeen studies were included in our meta-analysis, incorporating 11,378 patients. At a score ≥2 points, the STRATIFY rule is more useful at ruling out falls in those classified as low risk, with a greater pooled sensitivity estimate (0.67, 95% CI 0.52–0.80) than specificity (0.57, 95% CI 0.45–0.69). The sensitivity analysis which examined the performance of the rule in different settings and subgroups also showed broadly comparable results, indicating that the STRATIFY rule performs in a similar manner across a variety of different ‘at risk’ patient groups in different clinical settings. Conclusion: This systematic review shows that the diagnostic accuracy of the STRATIFY rule is limited and that the rule should not be used in isolation for identifying individuals at high risk of falls in clinical practice.
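The pooled estimates above rest on simple 2×2 arithmetic at the ≥2-point cutoff, which can be sketched directly; the confusion counts below are hypothetical, chosen only to reproduce the pooled point estimates:

```python
# Illustrative only: sensitivity/specificity arithmetic for a dichotomized
# fall-risk score. The 2x2 counts are hypothetical, chosen to reproduce
# the review's pooled point estimates (0.67 / 0.57).
def diagnostic_accuracy(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from 2x2 confusion counts."""
    sensitivity = tp / (tp + fn)   # P(score >= 2 | faller)
    specificity = tn / (tn + fp)   # P(score < 2 | non-faller)
    return sensitivity, specificity

# Hypothetical cohort: 100 fallers, 900 non-fallers.
sens, spec = diagnostic_accuracy(tp=67, fn=33, fp=387, tn=513)
print(round(sens, 2), round(spec, 2))  # 0.67 0.57
```

The bivariate random effects model used in the review pools such pairs across studies while modeling their correlation; the sketch shows only the per-study computation.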

  7. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
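The idea of condition-number regularization can be illustrated with a simplified eigenvalue-truncation sketch: clip the sample eigenvalues from below so the condition number cannot exceed a chosen bound. This shows the general principle only; the truncation level here is an ad hoc assumption, not the paper's maximum likelihood solution, which selects that level optimally:

```python
import numpy as np

def clip_condition_number(S, kappa_max):
    """Return a well-conditioned covariance estimate by truncating the
    eigenvalues of S from below. A simplified sketch of condition-number
    regularization, not the paper's ML estimator."""
    vals, vecs = np.linalg.eigh(S)
    lo = vals.max() / kappa_max            # simple (ad hoc) truncation level
    vals = np.clip(vals, lo, None)
    return vecs @ np.diag(vals) @ vecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))          # "large p, small n": n=20, p=50
S = np.cov(X, rowvar=False)                # singular: cond(S) is infinite
S_hat = clip_condition_number(S, kappa_max=100.0)
ev = np.linalg.eigvalsh(S_hat)
print(ev.max() / ev.min())                 # bounded by kappa_max; invertible
```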

  8. Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates

    Science.gov (United States)

    Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn

    2016-01-01

Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers to make rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity: leave-one-out cross-validation accuracy of abundance-only models ranged from ∼80% to ∼90%. Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as understanding the abiotic factors that influence toxicity during blooms.
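The model-ranking step described above (AIC values and Akaike weights) can be sketched directly; the two candidate models and their log-likelihoods below are hypothetical, invented only to show the arithmetic:

```python
import math

# Sketch of AIC ranking for two hypothetical candidate models
# (abundance-only vs. abundance plus one environmental covariate).
def aic(log_lik, k):
    """Akaike Information Criterion: 2k - 2*log-likelihood."""
    return 2 * k - 2 * log_lik

def akaike_weights(aics):
    """Normalized relative support for each candidate model."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    return [r / sum(rel) for r in rel]

aics = [aic(log_lik=-120.0, k=2), aic(log_lik=-114.5, k=3)]
print(aics)                                         # [244.0, 235.0]
print([round(w, 3) for w in akaike_weights(aics)])  # [0.011, 0.989]
```

Here the richer model earns nearly all of the Akaike weight despite its extra parameter, mirroring the study's finding that environmental covariates improved the top-ranked models.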

  9. Lorentz-like covariant equations of non-relativistic fluids

    International Nuclear Information System (INIS)

    Montigny, M de; Khanna, F C; Santana, A E

    2003-01-01

We use a geometrical formalism of Galilean invariance to build various hydrodynamics models. It consists of embedding the Newtonian spacetime into a non-Euclidean 4 + 1 space, thereby providing a procedure that unifies models otherwise apparently unrelated. After expressing the Navier-Stokes equation within this framework, we show that slight modifications of its Lagrangian allow us to recover the Chaplygin equation of state as well as models of superfluids for liquid helium (with both its irrotational and rotational components). Other fluid equations are also expressed in a covariant form.

  10. An experimental investigation of stratified two-phase pipe flow at small inclinations

    Energy Technology Data Exchange (ETDEWEB)

    Espedal, Mikal

    1998-12-31

The prediction of stratified flow is important for several industrial applications. Stratified flow experiments were carefully performed in order to investigate the performance of a typical model which uses wall friction factors based on single phase pipe flow as described above. The test facility has an 18.5 m long, 60 mm i.d. (L/D = 300) acrylic test section which can be inclined between -10° and +10°. The liquid holdup was measured by using fast closing valves and the pressure gradients by using three differential pressure transducers. Interfacial waves were measured by thin wire conductance probes mounted in a plane perpendicular to the main flow. The experiments were performed using water and air at atmospheric pressure. The selected test section inclinations were between -3° and +0.5° to the horizontal plane. A large number of experiments were performed for different combinations of air and water flow rates, and the rates were limited to avoid slug flow and stratified flow with liquid droplets. The pressure gradient and the liquid holdup were measured. In addition the wave probes were used to find the wave heights and the wave power spectra. The results show that the predicted pressure gradient using the standard models is approximately 30% lower than the measured value when large amplitude waves are present. When the flow is driven by the interfacial force the test section inclination has a minor influence on the deviation between predicted and measured pressure gradients. Similar trends are apparent in data from the literature, although they seem to have gone unnoticed. For several data sets a large spread in the predictions is observed when the model described above was used. Gas wall shear stress experiments indicate that the main cause of the deviation between measured and predicted pressure gradient and holdup resides in the modelling of the liquid wall friction term. Measurements of the liquid wall shear stress distribution

  11. Friedmann cosmology with a cosmological 'constant' in the scale covariant theory

    International Nuclear Information System (INIS)

    Beesham, A.

    1986-01-01

Homogeneous isotropic cosmologies in the presence of a cosmological 'constant' are studied in the scale covariant theory. A class of solutions is obtained for κ = 0 for models filled with dust, radiation or stiff matter. For κ ≠ 0, solutions are presented for the radiation models. (author)

  12. Crystallization of a compositionally stratified basal magma ocean

    Science.gov (United States)

    Laneuville, Matthieu; Hernlund, John; Labrosse, Stéphane; Guttenberg, Nicholas

    2018-03-01

    Earth's ∼3.45 billion year old magnetic field is regenerated by dynamo action in its convecting liquid metal outer core. However, convection induces an isentropic thermal gradient which, coupled with a high core thermal conductivity, results in rapid conducted heat loss. In the absence of implausibly high radioactivity or alternate sources of motion to drive the geodynamo, the Earth's early core had to be significantly hotter than the melting point of the lower mantle. While the existence of a dense convecting basal magma ocean (BMO) has been proposed to account for high early core temperatures, the requisite physical and chemical properties for a BMO remain controversial. Here we relax the assumption of a well-mixed convecting BMO and instead consider a BMO that is initially gravitationally stratified owing to processes such as mixing between metals and silicates at high temperatures in the core-mantle boundary region during Earth's accretion. Using coupled models of crystallization and heat transfer through a stratified BMO, we show that very high temperatures could have been trapped inside the early core, sequestering enough heat energy to run an ancient geodynamo on cooling power alone.

  13. The Goodness of Covariance Selection Problem from AUC Bounds

    OpenAIRE

    Khajavi, Navid Tafaghodi; Kuh, Anthony

    2016-01-01

We conduct a study of graphical models and discuss the quality of model selection approximation by formulating the problem as a detection problem and examining the area under the curve (AUC). We are specifically looking at the model selection problem for jointly Gaussian random vectors. For Gaussian random vectors, this problem simplifies to the covariance selection problem, which was widely discussed in the literature by Dempster [1]. In this paper, we give the definition for the correlation appro...

  14. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
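The core idea, letting the actual residuals rather than only the assumed measurement noise drive the reported uncertainty, can be sketched by scaling the formal weighted-least-squares covariance by the average weighted residual. This is an illustration of the concept, not the paper's exact derivation:

```python
import numpy as np

# Conceptual sketch: scale the formal covariance (A^T W A)^{-1} by the
# average weighted residual, so all error sources present in the
# residuals are reflected in the state error covariance.
def empirical_covariance(A, W, y):
    P_formal = np.linalg.inv(A.T @ W @ A)   # traditional state error covariance
    x_hat = P_formal @ A.T @ W @ y          # weighted least-squares estimate
    r = y - A @ x_hat                       # measurement residuals
    scale = (r @ W @ r) / len(y)            # average weighted residual variance
    return x_hat, scale * P_formal

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.3 * rng.standard_normal(100)  # actual noise sigma = 0.3
W = np.eye(100) / 0.1**2                         # assumed sigma = 0.1 (optimistic)
x_hat, P_emp = empirical_covariance(A, W, y)
```

Because the assumed noise level is optimistic here, the formal covariance understates the true uncertainty; the residual-based scaling inflates it by roughly (0.3/0.1)², which is exactly the behavior the empirical matrix is meant to capture.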

  15. Aligning the Economic Value of Companion Diagnostics and Stratified Medicines

    Directory of Open Access Journals (Sweden)

    Edward D. Blair

    2012-11-01

The twin forces of payors seeking fair pricing and the rising costs of developing new medicines have driven a closer relationship between pharmaceutical companies and diagnostics companies, because stratified medicines, guided by companion diagnostics, offer better commercial, as well as clinical, outcomes. Stratified medicines have created clinical success and provided rapid product approvals, particularly in oncology, and indeed have changed the dynamic between drug and diagnostic developers. The commercial payback for such partnerships offered by stratified medicines has been less well articulated, but this has shifted as the benefits in risk management, pricing and value creation for all stakeholders become clearer. In this larger healthcare setting, stratified medicine provides both physicians and patients with greater insight on the disease and provides rationale for providers to understand cost-effectiveness of treatment. This article considers how the economic value of stratified medicine relationships can be recognized and translated into better outcomes for all healthcare stakeholders.

  16. Large eddy simulation of stably stratified turbulence

    International Nuclear Information System (INIS)

    Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao

    2011-01-01

Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized to investigate homogeneous stably stratified turbulence numerically at Reynolds number Re = uL/ν = 10²–10³ and Froude number Fr = u/NL = 10⁻²–10⁰, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt–Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, Taylor-Green vortex and internal waves, to investigate the statistical properties from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr² is greater than O(1). It has also been found that stably stratified turbulence can develop under different initial velocity conditions and that internal wave energy dominates in the developed stably stratified turbulence.

  17. Neutron spectrum adjustment. The role of covariances

    International Nuclear Information System (INIS)

    Remec, I.

    1992-01-01

The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates are found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectrum covariances were found to be important. Approximate spectrum covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)

  18. Covariance functions across herd production levels for test day records on milk, fat, and protein yields

    NARCIS (Netherlands)

    Veerkamp, R.F.; Goddard, M.E.

    1998-01-01

    Multiple-trait BLUP evaluations of test day records require a large number of genetic parameters. This study estimated covariances with a reduced model that included covariance functions in two dimensions (stage of lactation and herd production level) and all three yield traits. Records came from

  19. The method of covariant symbols in curved space-time

    International Nuclear Information System (INIS)

    Salcedo, L.L.

    2007-01-01

    Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)

  20. Stratified medicine and reimbursement issues

    NARCIS (Netherlands)

    Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten

    2012-01-01

    Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to

  1. Instabilities of continuously stratified zonal equatorial jets in a periodic channel model

    Directory of Open Access Journals (Sweden)

    S. Masina

    2002-05-01

Several numerical experiments are performed in a nonlinear, multi-level periodic channel model centered on the equator with different zonally uniform background flows which resemble the South Equatorial Current (SEC). Analysis of the simulations focuses on identifying stability criteria for a continuously stratified fluid near the equator. A 90 m deep frontal layer is required to destabilize a zonally uniform, 10° wide, westward surface jet that is symmetric about the equator and has a maximum velocity of 100 cm/s. In this case, the phase velocity of the excited unstable waves is very similar to the phase speed of the Tropical Instability Waves (TIWs) observed in the eastern Pacific Ocean. The vertical scale of the baroclinic waves corresponds to the frontal layer depth and their phase speed increases as the vertical shear of the jet is doubled. When the westward surface parabolic jet is made asymmetric about the equator, in order to simulate more realistically the structure of the SEC in the eastern Pacific, two kinds of instability are generated. The oscillations that grow north of the equator have a baroclinic nature, while those generated on and very close to the equator have a barotropic nature. This study shows that the potential for baroclinic instability in the equatorial region can be as large as at mid-latitudes, if the tendency of isotherms to have a smaller slope for a given zonal velocity, when the Coriolis parameter vanishes, is compensated for by the wind effect. Key words. Oceanography: general (equatorial oceanography; numerical modeling) – Oceanography: physics (fronts and jets)

  2. A prognostic model of triple-negative breast cancer based on miR-27b-3p and node status.

    Directory of Open Access Journals (Sweden)

    Songjie Shen

Triple-negative breast cancer (TNBC) is an aggressive but heterogeneous subtype of breast cancer. This study aimed to identify and validate a prognostic signature for TNBC patients to improve prognostic capability and to guide individualized treatment. We retrospectively analyzed the prognostic performance of clinicopathological characteristics and miRNAs in a training set of 58 patients with invasive ductal TNBC diagnosed between 2002 and 2012. A prediction model was developed based on independent clinicopathological and miRNA covariates. The prognostic value of the model was further validated in a separate set of 41 TNBC patients diagnosed between 2007 and 2008. Only lymph node status was marginally significantly associated with poor prognosis of TNBC (P = 0.054), whereas other clinicopathological factors, including age, tumor size, histological grade, lymphovascular invasion, P53 status, Ki-67 index, and type of surgery, were not. The expression levels of miR-27b-3p, miR-107, and miR-103a-3p were significantly elevated in the metastatic group compared with the disease-free group (P value: 0.008, 0.005, and 0.050, respectively). The Cox proportional hazards regression analysis revealed that lymph node status and miR-27b-3p were independent predictors of poor prognosis (P value: 0.012 and 0.027, respectively). A logistic regression model was developed based on these two independent covariates, and the prognostic value of the model was subsequently confirmed in a separate validation set. The two different risk groups, which were stratified according to the model, showed significant differences in the rates of distant metastasis and breast cancer-related death not only in the training set (P value: 0.001 and 0.040, respectively) but also in the validation set (P value: 0.013 and 0.012, respectively). This model based on miRNA and node status covariates may be used to stratify TNBC patients into different prognostic subgroups for potentially
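A two-covariate logistic risk model of the kind described can be sketched as follows; the coefficients and the 0.5 cutoff are invented for illustration, not the study's fitted values:

```python
import math

# Hypothetical two-covariate logistic prognostic model
# (node status + miR-27b-3p level); b0, b1, b2 are made up.
def risk_probability(node_positive, mir27b, b0=-2.0, b1=1.5, b2=0.8):
    z = b0 + b1 * node_positive + b2 * mir27b
    return 1.0 / (1.0 + math.exp(-z))       # logistic link

def risk_group(p, cutoff=0.5):
    """Stratify a patient into a risk subgroup by predicted probability."""
    return "high" if p >= cutoff else "low"

p = risk_probability(node_positive=1, mir27b=1.2)
print(risk_group(p))   # high
```

Thresholding the predicted probability is what stratifies patients into the two risk groups compared in the training and validation sets.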

  3. Depression and Delinquency Covariation in an Accelerated Longitudinal Sample of Adolescents

    Science.gov (United States)

    Kofler, Michael J.; McCart, Michael R.; Zajac, Kristyn; Ruggiero, Kenneth J.; Saunders, Benjamin E.; Kilpatrick, Dean G.

    2011-01-01

    Objectives: The current study tested opposing predictions stemming from the failure and acting out theories of depression-delinquency covariation. Method: Participants included a nationwide longitudinal sample of adolescents (N = 3,604) ages 12 to 17. Competing models were tested with cohort-sequential latent growth curve modeling to determine…

  4. Conformally covariant massless spin-two field equations

    International Nuclear Information System (INIS)

    Drew, M.S.; Gegenberg, J.D.

    1980-01-01

An explicit proof is constructed to show that the field equations for a symmetric tensor field h_ab describing massless spin-2 particles in Minkowski space-time are not covariant under the 15-parameter group SO(4,2); this group is usually associated with conformal transformations on flat space, and here it will be considered as a global gauge group which acts upon matter fields defined on space-time. Notwithstanding the above noncovariance, the equations governing the rank-4 tensor S_abcd constructed from h_ab are shown to be covariant provided the contraction S_ab vanishes. Conformal covariance is proved by demonstrating the covariance of the equations for the equivalent 5-component complex field; in fact, covariance is proved for a general field equation applicable to massless particles of any spin > 0. It is shown that the noncovariance of the h_ab equations may be ascribed to the fact that the transformation behaviour of h_ab is not the same as that of a field consisting of a gauge only. Since this is in contradistinction to the situation for the electromagnetic-field equations, the vector form of the electromagnetic equations is cast into a form which can be duplicated for the h_ab-field. This procedure results in an alternative, covariant, field equation for h_ab. (author)

  5. A model for warfare in stratified small-scale societies: The effect of within-group inequality

    Science.gov (United States)

    Pandit, Sagar; van Schaik, Carel

    2017-01-01

    In order to predict the features of non-raiding human warfare in small-scale, socially stratified societies, we study a coalitionary model of war that assumes that individuals participate voluntarily because their decisions serve to maximize fitness. Individual males join the coalition if war results in a net economic and thus fitness benefit. Within the model, viable offensive war ensues if the attacking coalition of males can overpower the defending coalition. We assume that the two groups will eventually fuse after a victory, with ranks arranged according to the fighting abilities of all males and that the new group will adopt the winning group’s skew in fitness payoffs. We ask whether asymmetries in skew, group size and the amount of resources controlled by a group affect the likelihood of successful war. The model shows, other things being equal, that (i) egalitarian groups are more likely to defeat their more despotic enemies, even when these are stronger, (ii) defection to enemy groups will be rare, unless the attacked group is far more despotic than the attacking one, and (iii) genocidal war is likely under a variety of conditions, in particular when the group under attack is more egalitarian. This simple optimality model accords with several empirically observed correlations in human warfare. Its success underlines the important role of egalitarianism in warfare. PMID:29228014

  6. GLq(N)-covariant quantum algebras and covariant differential calculus

    International Nuclear Information System (INIS)

    Isaev, A.P.; Pyatov, P.N.

    1993-01-01

We consider GL_q(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)

  7. Asset allocation with different covariance/correlation estimators

    OpenAIRE

    Μανταφούνη, Σοφία

    2007-01-01

The subject of the study is to test whether using covariance/correlation estimators other than the widely used historical covariance matrix would help in portfolio optimization through mean-variance analysis. In other words, if an investor would like to use mean-variance analysis in order to invest in assets like stocks or indices, would it be of some help to use more sophisticated estimators for the covariance matrix of the returns of his portfolio? The procedure ...
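How a covariance estimator feeds into mean-variance analysis can be sketched with global minimum-variance weights, computed from either the historical matrix or an alternative estimator; the shrinkage target and its intensity below are simple hypothetical choices, not the estimators studied in the thesis:

```python
import numpy as np

# Global minimum-variance portfolio: w proportional to Sigma^{-1} 1.
def min_variance_weights(Sigma):
    w = np.linalg.solve(Sigma, np.ones(Sigma.shape[0]))
    return w / w.sum()

# One simple alternative to the raw historical matrix: shrinkage toward
# a scaled identity (alpha is a hypothetical tuning choice).
def shrunk_covariance(S, alpha):
    p = S.shape[0]
    return (1 - alpha) * S + alpha * (np.trace(S) / p) * np.eye(p)

rng = np.random.default_rng(2)
R = 0.02 * rng.standard_normal((60, 10))   # 60 days of returns, 10 assets
S_hist = np.cov(R, rowvar=False)
for Sigma in (S_hist, shrunk_covariance(S_hist, alpha=0.3)):
    w = min_variance_weights(Sigma)
    print(float(w.sum()), float(w @ Sigma @ w))  # fully invested; in-sample variance
```

Swapping the estimator changes the weights (and out-of-sample risk) while the optimization itself stays the same, which is exactly the comparison the study sets up.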

  8. Illustration of Step-Wise Latent Class Modeling With Covariates and Taxometric Analysis in Research Probing Children's Mental Models in Learning Sciences

    Directory of Open Access Journals (Sweden)

    Dimitrios Stamovlasis

    2018-04-01

This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA), using empirical data from research probing children's mental representation in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of classification process and the unbiased estimations of parameters become the main concern. In the step-wise LCA, the class membership is assigned and subsequently its relationship with covariates is established. This leading-edge modeling approach suffers from severe downward-biased estimations. The illustration of LCA is focused on alternative bias correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily in the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study and it comprises an additional asset in building theory which is fundamental for educational practices. Taxometric analysis provided results that were ambiguous as far as the type of the latent structure is concerned. This finding initiates further discussion and sets a problematization within this framework rethinking fundamental assumptions and epistemological issues.

  9. Illustration of Step-Wise Latent Class Modeling With Covariates and Taxometric Analysis in Research Probing Children's Mental Models in Learning Sciences.

    Science.gov (United States)

    Stamovlasis, Dimitrios; Papageorgiou, George; Tsitsipis, Georgios; Tsikalas, Themistoklis; Vaiopoulou, Julie

    2018-01-01

This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA) using empirical data from research probing children's mental representation in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of classification process and the unbiased estimations of parameters become the main concern. In the step-wise LCA, the class membership is assigned and subsequently its relationship with covariates is established. This leading-edge modeling approach suffers from severe downward-biased estimations. The illustration of LCA is focused on alternative bias correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily in the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study and it comprises an additional asset in building theory which is fundamental for educational practices. Taxometric analysis provided results that were ambiguous as far as the type of the latent structure is concerned. This finding initiates further discussion and sets a problematization within this framework rethinking fundamental assumptions and epistemological issues.

  10. Construction and use of gene expression covariation matrix

    Directory of Open Access Journals (Sweden)

    Bellis Michel

    2009-07-01

strings of symbols. Conclusion: This new method, applied to four different large data sets, has allowed us to construct distinct covariation matrices with similar properties. We have also developed a technique to translate these covariation networks into graphical 3D representations and found that the local assignation of the probe sets was conserved across the four chip set models used, which encompass three different species (humans, mice, and rats). The application of adapted clustering methods succeeded in delineating six conserved functional regions that we characterized using Gene Ontology information.

  11. Moderating the Covariance Between Family Member’s Substance Use Behavior

    Science.gov (United States)

    Eaves, Lindon J.; Neale, Michael C.

    2014-01-01

    Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression or in shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same consequences, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of their age difference. We describe the details of the model and simulate data using a variety of parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low, so large sample sizes are required before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly with age difference. Result 3 implies that, for example, genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts, may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can, in some cases, be explained by age moderation. This effect likely contributes to the missing heritability paradox. PMID:24647834
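    The idea of a sibling covariance that decays with age difference can be sketched as follows; the exponential decay form and all parameter values are illustrative assumptions, not the paper's exact parameterization:

    ```python
    import numpy as np

    def sibling_covariance(a2, c2, beta, age_diff):
        """Covariance between two siblings, with the shared component c2
        decaying in the siblings' age difference.

        The exponential decay is an illustrative choice; the paper's
        moderation function may differ.
        """
        return a2 + c2 * np.exp(-beta * abs(age_diff))

    # Same-age twins share the full common component (a2 + c2 = 0.7)...
    print(sibling_covariance(0.4, 0.3, 0.5, 0.0))
    # ...while siblings four years apart share much less of it.
    print(sibling_covariance(0.4, 0.3, 0.5, 4.0))
    ```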

  12. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, the MDT can now for the very first time be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, a realistic statistical error estimate is available for the geoid component, while the error description of the altimetric component is still an open issue and is, if at all, treated only empirically. In this study we attempt to perform, based on the full gravity VCM, a rigorous error propagation to the derived geostrophic surface velocities, thus also taking all correlations into account. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using or neglecting covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered up to a certain maximum degree, usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of the filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study is performed for MDT estimates in specific test areas of particular oceanographic interest.
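    For a linearized functional y = Jx of the geoid coefficients x with variance-covariance matrix Σ, rigorous first-order covariance propagation reduces to Cov(y) = J Σ Jᵀ. A tiny sketch with made-up matrices (not GOCE products):

    ```python
    import numpy as np

    # J: Jacobian of two derived quantities with respect to three
    # coefficients; Sigma: their variance-covariance matrix.
    # Both are illustrative toy matrices.
    J = np.array([[1.0, 2.0,  0.0],
                  [0.0, 1.0, -1.0]])
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])

    # Full propagation, J @ Sigma @ J.T, keeps all cross-correlations;
    # using only the diagonal of Sigma would discard them.
    Cov_y = J @ Sigma @ J.T
    print(Cov_y)
    ```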

  13. Students’ Covariational Reasoning in Solving Integrals’ Problems

    Science.gov (United States)

    Harini, N. V.; Fuad, Y.; Ekawati, R.

    2018-01-01

    Covariational reasoning plays an important role in understanding how quantities vary together when learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that the undergraduate students were able to construct the relation between dependent variables that change in tandem with the independent variable. However, students had difficulty forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that the learning of calculus should place increased emphasis on coordinating images of two quantities changing in tandem with respect to the instantaneous rate of change, and should promote conceptual knowledge of integration techniques.

  14. Covariance upperbound controllers for networked control systems

    International Nuclear Information System (INIS)

    Ko, Sang Ho

    2012-01-01

    This paper deals with designing covariance upperbound controllers for a linear system that can be used in a networked control environment in which control laws are calculated in a remote controller and transmitted through a shared communication link to the plant. In order to compensate for possible packet losses during the transmission, two different techniques are often employed: the zero-input and the hold-input strategy. These use zero input and the latest control input, respectively, when a packet is lost. For each strategy, we synthesize a class of output covariance upperbound controllers for a given covariance upperbound and a packet loss probability. Existence conditions of the covariance upperbound controller are also provided for each strategy. Through numerical examples, the performance of the two strategies is compared in terms of the feasibility of implementing the controllers.
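    A toy simulation contrasting the zero-input and hold-input strategies on a scalar plant; the plant, gain, and loss pattern are illustrative, and the paper's actual contribution, synthesizing controllers with a guaranteed output covariance upperbound, is not reproduced here:

    ```python
    # Scalar networked plant x[k+1] = a*x[k] + b*u[k], with the control
    # input computed remotely and occasionally lost in transmission.
    a, b, K = 1.1, 1.0, -0.6                  # unstable plant, stabilizing gain
    losses = [i % 3 == 0 for i in range(30)]  # every third packet is lost

    def run(strategy):
        x, u_prev = 1.0, 0.0
        for lost in losses:
            if lost:
                # zero-input applies no control; hold-input reuses the
                # last successfully received control input.
                u = 0.0 if strategy == "zero" else u_prev
            else:
                u = K * x      # fresh control packet arrives
                u_prev = u
            x = a * x + b * u
        return x

    print(run("zero"), run("hold"))  # both settle near zero here
    ```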

  15. ERRORJ. Covariance processing code system for JENDL. Version 2

    International Nuclear Information System (INIS)

    Chiba, Gou

    2003-09-01

    ERRORJ is the covariance processing code system for the Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data for use in uncertainty analyses of nuclear characteristics. ERRORJ can treat covariance data for cross sections, including resonance parameters, as well as for angular and energy distributions of secondary neutrons, which could not be handled by earlier covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross sections and produce multi-group covariance files in various formats. This document describes an outline of ERRORJ and how to use it. (author)
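    As a sketch of one elementary step such a processing code performs, the following converts a relative covariance matrix into an absolute multi-group covariance given group-averaged cross sections (all numbers are made up, not JENDL data, and ERRORJ's actual algorithms are far more involved):

    ```python
    import numpy as np

    # Illustrative group-averaged cross sections (barns) for three groups.
    sigma = np.array([2.0, 1.5, 0.8])
    # Illustrative relative covariance matrix between the groups.
    rel_cov = np.array([[0.0025, 0.0010, 0.0000],
                        [0.0010, 0.0040, 0.0005],
                        [0.0000, 0.0005, 0.0100]])

    # Absolute covariance: Cov_ij = rel_ij * sigma_i * sigma_j.
    abs_cov = rel_cov * np.outer(sigma, sigma)
    # Relative standard deviation per group, in percent.
    std_percent = 100 * np.sqrt(np.diag(rel_cov))

    print(abs_cov)
    print(std_percent)
    ```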

  16. A Note on the Power Provided by Sibships of Sizes 2, 3, and 4 in Genetic Covariance Modeling of a Codominant QTL.

    NARCIS (Netherlands)

    Dolan, C.V.; Boomsma, D.I.; Neale, M.C.

    1999-01-01

    The contribution of size 3 and size 4 sibships to power in covariance structure modeling of a codominant QTL is investigated. Power calculations are based on the noncentral chi-square distribution. Sixteen sets of parameter values are considered. Results indicate that size 3 and size 4 sibships

  17. ACORNS, Covariance and Correlation Matrix Diagonalization

    International Nuclear Information System (INIS)

    Szondi, E.J.

    1990-01-01

    1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT
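    The core diagonalization-and-rank check can be sketched as follows (a toy correlation matrix, not ACORNS itself):

    ```python
    import numpy as np

    # A small symmetric correlation matrix to verify.
    corr = np.array([[1.0, 0.8, 0.8],
                     [0.8, 1.0, 0.8],
                     [0.8, 0.8, 1.0]])

    # Diagonalize; eigh returns eigenvalues in ascending order for a
    # symmetric matrix.
    eigvals, eigvecs = np.linalg.eigh(corr)

    # Rank = number of eigenvalues above a numerical tolerance; negative
    # eigenvalues would flag an invalid (non positive semi-definite) matrix.
    rank = int(np.sum(eigvals > 1e-10))

    print(eigvals)  # all positive here, so the matrix is full rank
    print(rank)     # 3
    ```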

  18. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    Science.gov (United States)

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows typically induced by flood runoff cause numerous environmental impacts, such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional, laterally averaged hydrodynamic and transport model and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality may be minor.
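    RUN2's idea of a settling velocity depending on particle size, density, and water temperature can be sketched with Stokes' law; the viscosity fit and the assumption that the model uses exactly this law are illustrative, not taken from the paper:

    ```python
    def settling_velocity(d, rho_p, temp_c):
        """Stokes settling velocity (m/s) of a particle of diameter d (m)
        and density rho_p (kg/m^3) in water at temp_c (degrees C).

        Stokes' law is one common choice; the paper's exact formulation
        may differ.
        """
        g = 9.81        # gravitational acceleration, m/s^2
        rho_w = 1000.0  # water density, kg/m^3 (approximate)
        # Simple empirical fit for the dynamic viscosity of water (Pa*s),
        # which decreases as the water warms.
        mu = 1.79e-3 / (1.0 + 0.0337 * temp_c + 0.000221 * temp_c**2)
        return g * (rho_p - rho_w) * d**2 / (18.0 * mu)

    # A 10-micron silt particle settles faster in warm (less viscous)
    # surface water than in the cold hypolimnion.
    print(settling_velocity(10e-6, 2650.0, 25.0))
    print(settling_velocity(10e-6, 2650.0, 5.0))
    ```

    Temperature-dependent settling interacts with the reservoir's thermal stratification, which is why RUN2 handles it explicitly.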

  19. Condition Number Regularized Covariance Estimation

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required. PMID:23730197
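    A simplified way to see condition-number regularization is eigenvalue clipping: force the spectrum into an interval whose endpoints differ by at most the target condition number. This is a heuristic sketch, not the paper's maximum likelihood estimator, which derives the truncation level from the likelihood:

    ```python
    import numpy as np

    def condition_regularize(S, kappa_max):
        """Clip the eigenvalues of S into [tau, lambda_max] so that the
        condition number is at most kappa_max. The choice of tau here is
        a simple heuristic, not the paper's likelihood-based optimum."""
        eigvals, eigvecs = np.linalg.eigh(S)
        tau = eigvals.max() / kappa_max
        clipped = np.clip(eigvals, tau, eigvals.max())
        return eigvecs @ np.diag(clipped) @ eigvecs.T

    # "Large p small n"-style toy data: 20 variables, only 8 observations,
    # so the sample covariance is singular.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 20))
    S = np.cov(X, rowvar=False)
    S_reg = condition_regularize(S, kappa_max=50.0)

    print(np.linalg.cond(S))      # enormous: the sample covariance is singular
    print(np.linalg.cond(S_reg))  # bounded by roughly kappa_max
    ```

    Clipping only the extreme eigenvalues is a Steinian-style shrinkage of the spectrum, which is why the regularized estimator remains usable (invertible and well-conditioned) downstream, e.g. in portfolio optimization.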

  20. The Stratified Legitimacy of Abortions.

    Science.gov (United States)

    Kimport, Katrina; Weitz, Tracy A; Freedman, Lori

    2016-12-01

    Roe v. Wade was heralded as an end to unequal access to abortion care in the United States. However, today, despite being common and safe, abortion is performed only selectively in hospitals and private practices. Drawing on 61 interviews with obstetrician-gynecologists in these settings, we examine how they determine which abortions to perform. We find that they distinguish between more and less legitimate abortions, producing a narrative of stratified legitimacy that privileges abortions for intended pregnancies, when the fetus is unhealthy, and when women perform normative gendered sexuality, including distress about the abortion, guilt about failure to contracept, and desire for motherhood. This stratified legitimacy can perpetuate socially-inflected inequality of access and normative gendered sexuality. Additionally, we argue that the practice by physicians of distinguishing among abortions can legitimate legislative practices that regulate and restrict some kinds of abortion, further constraining abortion access. © American Sociological Association 2016.